
Top-Down Modulation of the auditory Steady-State Response

Diploma thesis (Diplomarbeit) in Psychology, submitted by Thomas Hartmann in September 2006

First reviewer: Prof. Dr. Thomas Elbert
Second reviewer: PD Dr. Andreas Keil

Konstanzer Online-Publikations-System (KOPS) URL: http://www.ub.uni-konstanz.de/kops/volltexte/2006/2015/


Many thanks to...

...Nathan Weisz, who accompanied and supported me as a mentor, boss, and friend for almost my entire studies. I owe him a large part of my neuropsychological knowledge and the motivation to question things. Through his ability to look at things from ever new perspectives, his wealth of ideas, and his knowledge of how to put them into scientific practice, he has shaped my way of thinking like no one else.

I also thank him for proofreading this thesis and for the quick and precise feedback that helped to bring out the results of this work more clearly.

...Winfried Schlee for the best methods tutorial in my second semester, for very helpful comments on my diploma thesis, for new food for thought, and above all for his warm, cheerful, and uncomplicated manner. I am looking forward to many shared hours in the office, at conferences, and in the pub!

...Katalin Dohrmann for ideas from sometimes completely different directions, which let one look at a problem from another perspective, and for a great atmosphere in the office with many wonderful short breaks.

...Christiane Wolf, Bärbel Awiszus, and Ursula Lommen, without whom not a single EEG or MEG measurement would run. Many thanks for the support over the last years, and especially during the data acquisition for this thesis, where even short-notice schedule changes were no problem.

...Christian Wienbruch and Patrick Berg, for an incredible wealth of knowledge and experience that they are always willing to share.

...Brigitte Rockstroh and Thomas Elbert for a great deal of freedom and the trust that goes with it, especially regarding my future path.

...Andreas Keil for the uncomplicated access to literature and for taking on the second review without any fuss.

...Bernhard Ross for the uncomplicated access to the literature of his research group.

...Simone Unfried, who scrutinized the English of this thesis at short notice.

...the open-source community for producing such brilliant programs as LaTeX, R, EEGLAB, Kubuntu Linux, and many others.

...Dana Koch for motivation and reassurance, for putting up with my bad moods, and for many little things that would go beyond the scope of this page.

...Inge and Bernd Hartmann, my parents, who not only supported me with all their strength the whole time but also gave me the idea to study psychology.

Contents

Abstract

1 Introduction
1.1 The Steady-State Response
1.1.1 The nature of the auditory Steady-State Response
1.1.2 Parameters of the Steady-State Response
1.1.3 Scientific Applications of the Steady-State Response
1.1.4 Clinical Applications for the Steady-State Response
1.2 Top-Down versus Bottom-Up contribution to Perception
1.2.1 Attention
1.2.2 Expectancy
1.3 Summary
1.4 Aims of the current study

2 Material and Methods
2.1 Subjects
2.2 The Experiment
2.2.1 Design
2.2.2 Stimuli
2.2.3 Procedure
2.2.4 Stimulus Presentation and Data Acquisition
2.3 Data Analysis and Statistical Analysis
2.3.1 Behavioral Data
2.3.2 EEG-data

3 Results
3.1 Behavioral Data
3.2 Event Related Potentials
3.2.1 Amplitude
3.2.2 Latency
3.3 Auditory Steady-State Response
3.3.1 Amplitude
3.3.2 Source coherence

4 Discussion
4.1 Behavioral Data
4.1.1 Comparison with previous studies
4.1.2 Differences between literature reported and present data
4.2 Event-Related Potentials
4.2.1 Amplitude-Differences
4.2.2 Latency-Differences
4.3 Steady-State Amplitude
4.3.1 Comparison with previous studies
4.3.2 No effects in temporal sources
4.3.3 Effects in other areas
4.4 Phase-Coherence
4.4.1 General overview
4.4.2 Comparison with previous studies
4.4.3 Separate Discussions of the Effects
4.5 The role of the right parietal source
4.6 The role of the anterior cingulum
4.7 The role of the temporal regions
4.8 The Design
4.9 Conclusion

A Source-Montage

Abbreviations

A1 primary auditory cortex
AM amplitude modulated
aSSR auditory Steady-State Response
CR Conditioned Response
CS Conditioned Stimulus
EEG Electroencephalography
ERD Event-related desynchronization
ERP Event-related potential
FFT Fast Fourier Transform
IAPS International Affective Picture System
LQ Lateralization Quotient
MEG Magnetoencephalography
PCA Principal Component Analysis
rCBF regional Cerebral Blood Flow
SNR Signal-to-Noise Ratio
SSF Steady-State Field
SSR Steady-State Response
US Unconditioned Stimulus

Abstract

The classification of sensory processes into bottom-up and top-down mechanisms is generally accepted, but the impact of top-down processes on electrophysiological measurements is still not well understood. The following study proposes a design that attempts to examine the influence of top-down processes on the auditory steady-state response (aSSR) elicited by an amplitude-modulated tone. The EEG of twelve subjects was recorded while they believed they were solving a very difficult frequency discrimination task. In reality, this task was impossible because only one tone was used, thus eliminating the bottom-up influences that would arise from differences between the stimuli. In order to manipulate expectancies, two different kinds of auditory feedback were given, organized pseudorandomly so that a defined number of repetitions of equal feedback was achieved, following the work of Perruchet (1985) [31]. As only one tone was used, any changes in the behavioral and electrophysiological data could only stem from cognitive, and thus top-down, processes. The analysis of the behavioral data showed trends similar to Perruchet (1985) [31], who found that the subjects' rating of the likelihood of the occurrence of an aversive stimulus depended on how often this stimulus had been presented repeatedly in the past. The electrophysiological data were projected onto a source montage and analyzed using three approaches. First, a classical event-related-potential analysis was conducted. This revealed amplitude effects at the right frontal source and latency effects at the anterior cingulum. The analysis of the amplitude of the aSSR did not show any significant effects at the temporal sources, although this was expected, but did reveal a significant effect at the anterior cingulum. These effects closely resemble the behavioral data. The absence of amplitude effects in temporal regions, which contrasts with preceding studies, suggests that activity in primary auditory areas cannot be modulated with one single tone but only with two actually different stimuli. The phase coherences showed a large involvement of the right parietal source, especially in combination with the cingulum. In conclusion, the proposed design was able to modulate many parameters of the aSSR although the same stimulus was used in all trials. It can therefore be assumed that the aSSR can be modulated by top-down processes.

Chapter 1

Introduction

Every sensory percept, i.e. what we see, hear, feel, taste and smell, can be described by objective physical properties and could be assumed to elicit a constant percept within each individual. Although this view seems true and is rarely questioned by non-psychologists, research has shown that the actual percept depends not only on the current input but also on cognitive variables like attention and expectation, which are to a large part modulated by the context. This leaves open the question whether these changes, driven by top-down influences, can be measured in terms of cortical activity.

Polley et al. (2004) [35] showed changes in the firing rate and tonotopic organization of the primary auditory cortex (A1) in rats depending on whether or not it was important to discriminate a stimulus to get a reward. This leads to the assumption that top-down modulation takes place as early as in primary sensory areas.

This chapter gives the conceptual background of my study, in which the auditory Steady-State Response (aSSR) is used to investigate whether it is possible to modulate the activity of A1 in humans with a task that eliminates bottom-up influences as much as possible. Due to the use of a source montage with eight sources distributed over the whole brain, it will also be possible to monitor other areas in order to examine the possible activation of whole networks.


1.1 The Steady-State Response

Analysis of transient evoked potentials (ERPs) via Electroencephalography (EEG) or Magnetoencephalography (MEG) is the most widespread approach for examining basic sensory responses. The signal is usually triggered by the onset of a stimulus, e.g. a sound or a picture. By averaging numerous trials, the signal-to-noise ratio (SNR) can be increased sufficiently to make individual components such as the N1 observable. This, however, has the disadvantage that changes in brain responses are difficult to monitor across an experiment on short temporal scales. It is also not always convenient, or even practicable, to design an experiment with enough trials for each condition.

One possible way to overcome these limitations is to use the "Steady-State Response" (SSR), an oscillatory brain response driven by the modulation rate of a stimulus that is periodically modulated in some way. Galambos et al. [16] were the first to discover this phenomenon in 1981 for the auditory modality, calling it "The 40-Hz ERP phenomenon". This finding has triggered a great deal of research, not only in the auditory domain but also in the visual (Müller et al. 1997 [27]) and somatosensory (Tobimatsu et al. 1999 [47]) modalities.

In contrast to ERP data, which must be averaged across many trials and thus lose single-trial information, SSR data can be averaged either across trials or within each trial alone. The first approach sacrifices the differences between trials but allows the time course of the signal to be analyzed; the second approach leaves the between-trial information intact. This is possible because the signal of interest in a transient response is present just once at the beginning of a trial, while the signal of an SSR remains as long as the driving stimulus is present.

The paper of Ross et al. (2005) [39] gives an example of this advantage. They presented amplitude-modulated tones with lengths of 600 ms and 2 s to the subjects. The first 200 ms after stimulus onset were excluded because the transient responses would have disturbed the analysis. Every 25 ms, epochs of 50 ms were formed and averaged. In the 600 ms condition it was thus possible to average ((600 - 200)/25) - 1 = 15 samples, and in the 2 s condition even ((2000 - 200)/25) - 1 = 71 samples.
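As a quick check of this arithmetic, the number of averaging windows can be computed directly. The following minimal R sketch is only an illustration; the 200 ms offset and the 25 ms step are taken from the description above:

    # number of 50 ms windows, stepped every 25 ms, after discarding the first 200 ms
    n_windows <- function(tone_ms, offset_ms = 200, step_ms = 25) {
      (tone_ms - offset_ms) / step_ms - 1
    }
    n_windows(600)   # 15
    n_windows(2000)  # 71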

1.1.1 The nature of the auditory Steady-State Response

The auditory Steady-State Response is a phenomenon that is not fully understood to date. It seems widely accepted that it follows the time course of a periodic stimulus such as an amplitude-modulated tone (Galambos 1981 [16], Pantev 1996 [30], Gutschalk 1999 [17], Ross 2005 [39]).

Galambos et al. [16] found as early as 1981 that the amplitude and the latency of the aSSR depend on stimulus intensity: the amplitude increased and the latency became shorter with higher intensity. Another finding was that the amplitude was highest at modulation frequencies around 40 Hz. They also tried to answer the question of how the aSSR is generated.

They argued that it is a composite of various middle-latency waves following a stimulus between 20 ms and 50 ms. Although this view is still shared in current research (Reyes 2005 [37]), findings by Pantev (1996) [30], which will be discussed later in this section, conflict with it. There also exists a single-case study (Santarelli et al. 1999 [43]) that points out that more than just middle-latency responses are needed to obtain an aSSR.

Considering studies that have tried to localize the generator of the aSSR, three main approaches can be distinguished. The first tries to answer the question where the main source of the aSSR is located, comparing it e.g. with the sources of ERP components. The second examines the lateralization of the aSSR. The third approach assumes more than one main source, rather than a single equivalent dipole, and tries to model the aSSR with multiple dipoles.

Pantev et al. (1996) [30] compared the sources of the Steady-State Field (SSF), the N1m, and the Pam, a middle-latency component, for different carrier frequencies and found a tonotopic organization for each. While the tonotopic maps of the SSF and N1m had the same direction, these were opposite to the map of the Pam. As these are the components that, following Galambos et al. (1981) [16], sum up to form the SSF, there is the obvious conflict that these two do not share the same tonotopic gradient.

Another important finding was that the sources of the SSF were 1 cm medial and 0.5 cm anterior to the sources of the N1m, which was analyzed more deeply by Weisz et al. (2004) [49]. In this study three kinds of stimuli were used: one pure tone to evoke the N1m, one 39 Hz amplitude-modulated (AM) tone to evoke the SSF, and another, shorter 39 Hz AM tone which was used in a combined paradigm.

As in the study of Pantev et al. (1996) [30], tonotopic gradients for both the SSF and the N1m were found, sharing the same direction but situated at different locations. Interestingly, it was possible to reproduce this distinction using just the third paradigm, leading to two different tonotopic maps produced by the same sounds. Both studies agree that the sources of the SSF are found in primary auditory areas, more medial compared to those of the N1m component, which are located in non-primary areas.

This finding was later confirmed by Ross (2005) [39], who performed a source localization on the aSSR of amplitude-modulated tones. The main finding concerned the lateralization of the aSSR. They applied monaural stimulation to each ear as well as binaural stimulation with 40 Hz amplitude-modulated tone bursts with a carrier frequency of 500 Hz. The source strengths of the aSSR, the sustained field, the N1m and P1m, and the transient gamma response in both hemispheres were analyzed and the corresponding laterality index was calculated. The result was that the aSSR and the sustained field are lateralized to the right even when the stimulus is presented to the right ear.

Reyes et al. (2005) [37] point out that there is an ongoing debate on whether the generators are situated in cortical areas, subcortical areas, or even in the brainstem. Trying to map the generators of the aSSR, six sources were found that were common to both the LORETA solution and the minimum-norm solution. Four of these were located in the cortex while the other two were located in the brainstem. Although only the right ear was stimulated, four of the six sources were found to be located in the right hemisphere. Furthermore, a higher amplitude for the sources in the right hemisphere compared to those in the left hemisphere was reported. Although this finding is in complete agreement with the studies of Ross et al. (2005) [39] and Schlee (2006) [45], Reyes et al. [37] report that a literature search revealed that studies using a single dipole report a larger dipole moment for the contralateral cortex.

These differences are attributed to using a single dipole in contrast to using multiple sources, while theoretical explanations are not given. They finally conclude that the aSSR arises from a number of generators that repeatedly excite each other, starting in the right temporal lobe. They also acknowledge the role of subcortical structures as the starting point of the excitation.

Summarizing the findings mentioned above, three conclusions can be reached about the nature of the aSSR:

1. There exists a main generator of the aSSR that is assumed to lie in the primary auditory cortex and is different from the generator of the N1.

2. The amplitude of the aSSR is right-lateralized.

3. Although a main generator of the aSSR exists, several brain regions are activated that are assumed to form a network.

1.1.2 Parameters of the Steady-State Response

Three basic parameters of the SSR can be analyzed:

1. Amplitude
2. Phase
3. Latency

Amplitude

To compute the amplitude and phase of any waveform, a Fourier transform is applied to the data. The absolute value of the complex numbers resulting from the Fourier transform denotes the amplitude at the given frequency. The absolute value is calculated as follows:

$\mathrm{abs} = \sqrt{\mathrm{real}^2 + \mathrm{imag}^2}$

In SSR paradigms the amplitude is the measure of the magnitude of the signal. The meaning of a higher or lower amplitude is discussed further in the section on the scientific applications of the SSR.

Phase

The word "phase" is ambiguous in this context, as it has two meanings when talking about waves: first, the current position within a cycle, and second, the static offset of a wave relative to another wave or a reference.

When talking about phase in the context of the SSR, only the second meaning actually makes sense, because we are not interested in the current position of the activity forming the wave but in its offset relative to a virtual reference wave starting at the onset of the stimulus.

The phase offset is measured in radians and is calculated from the complex number of the Fourier transform as follows:

$\mathrm{angle} = \operatorname{atan2}(\mathrm{imag}, \mathrm{real})$

The phase is e.g. used to calculate the phase-coherence between two sites on the scalp, which is putatively related to the activation of networks.
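As an illustration of the two formulas above, the following minimal R sketch (the variable names and the toy signal are my own, not part of the original analysis) extracts amplitude and phase at a target frequency from a sampled signal:

    fs <- 250                                # sampling rate in Hz (as used for the EEG data)
    t  <- seq(0, 1 - 1/fs, by = 1/fs)        # 1 s of data
    x  <- sin(2 * pi * 40 * t + pi/4)        # toy 40 Hz signal with a known phase offset

    spec <- fft(x)
    bin  <- 40 * length(x) / fs + 1          # index of the 40 Hz frequency bin (1-based)

    amp   <- Mod(spec[bin]) * 2 / length(x)  # abs = sqrt(real^2 + imag^2), scaled to signal units
    phase <- Arg(spec[bin])                  # angle = atan2(imag, real), in radians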


Latency

As with "phase", latency can have two meanings. Concerning evoked signals like ERPs, latency simply refers to the time between the stimulus onset and the onset of the evoked response. Longer latencies may e.g. be attributed to slower processing speed or more sophisticated processing of the stimulus.

Concerning waveforms, latency can also be used to describe differences between the phases of two waves. As the aSSR is both an evoked and a cyclic signal, the use of the term "latency" needs to be clarified beforehand.

1.1.3 Scientific Applications of the Steady-State Response

The Steady-State Response is used in a wide variety of research. This section summarizes some findings in fields of research where the auditory and visual SSR have been applied.

In a recent paper, Picton et al. (2003) [33] give a comprehensive state-of-the-art review concerning methodological aspects of the auditory SSR. Surprisingly, the discussion of whether and how the aSSR is affected by cognitive functions such as attention, expectancy, or other variables like conditioning is limited to a very brief section. Indeed, this reflects the fact that, in contrast to the visual domain, not much is known about how different psychological factors influence the auditory SSR.

Addressing the question whether the auditory and visual SSR are comparable seems rather difficult, as the two arise from completely different sensory subsystems and the stimuli used in those studies differ in qualities such as modulation frequency and meaningfulness, to name just two. However, Müller et al. (1997) [27] located the source of the visual SSR in primary visual areas, leading to the assumption that findings in the visual domain might give a hint towards understanding its auditory counterpart.


Attention

Since the negative findings in the auditory domain of Linden et al. (1987) [23], who denied an effect of attention on the aSSR, much more refined research has been conducted to find out how the Steady-State Response is affected by attention.

More recent findings in the auditory (Ross et al. 2004 [40]) and visual (Di Russo et al. 2002 [41]) domains support the view that the amplitude and phase of the SSR are affected by attention and other cognitive variables.

Modulation of the visual SSR by attention. During the last years, many studies have confirmed that the amplitude of the visual SSR depends on attention in "attend/ignore" tasks (e.g. Chen et al. 2003 [5], Di Russo et al. 2002 [41]) and selective-attention tasks (e.g. Ding et al. 2005 [13], Morgan et al. 1996 [26]).

Probably the first to study selective attention using the visual SSR was Morgan (1996) [26]. Two flickering patterns of letters with an occasionally occurring digit were presented to the subjects' left and right visual fields. The task was to react to the digit if it was presented in the visual field indicated by an arrow at the start of each trial. He already used a technique that was later called "frequency tagging": one of the patterns had a flicker frequency of 12 Hz while the other was presented at 8.6 Hz, making it possible to distinguish the attended from the unattended side of the visual field. The conclusion was that selective attention increases the amplitude of the SSR at all recorded locations, although the highest increase was observed over the right hemisphere.

This seems contradictory in comparison with Silberstein et al. (1990) [46], who showed that the amplitude of the visual SSR decreases with higher vigilance to a task. Morgan (1996) [26] argues that this may be due to the fact that in Silberstein's study [46] the whole screen flickered and the stimulus that was to be attended was very small in comparison, while Morgan's [26] stimuli were bigger and were the only items flickering on the entire screen. This implies that in the Silberstein study [46] the visual SSR could have been suppressed because the subjects focused their attention on the small stimuli, thus masking the flickering background.


This conclusion is supported by the findings of Chen et al. (2003) [5]. They superimposed a horizontal and a vertical grating, one red and the other green. Each grating was made of seven bars. In each trial one of them was presented with a flicker frequency of 7.41 Hz while the other one was presented at 8.33 Hz, thus again using "frequency tagging" to distinguish between the two stimuli. From time to time one bar of each image increased or decreased its width by 20% for about 0.4 seconds. Before each trial subjects were given instructions which orientation (horizontal or vertical) to attend and whether to attend only the one bar in the middle of the grating or the three middle bars. When the subjects had to pay attention to a wider area in the 3-bar condition, the amplitude of the visual SSR for the attended grating increased, while the opposite was found in the 1-bar condition, when the area to pay attention to was smaller.

These findings confirm that the amplitude of the visual SSR is modulated up and down by attention, depending on several factors such as the selectivity of the stimuli.

Di Russo et al. (2002) [41] took a further step in understanding the link between attention and the visual SSR by putting an emphasis on latency, arguing that a shortened latency may be interpreted as a higher priority at early stages of processing. They compared the amplitude and latency of the visual SSR for three conditions using sinusoidal gratings flashing at 6-10 Hz. In the first condition subjects had to attend to changes of the frequency, the second consisted of a visual search task, and the third was just passive viewing. In accordance with the other studies, an increase in amplitude and decreased latencies were found in the attend condition.

Modulation of the auditory SSR by attention. The first to publish an article about the modulation of the aSSR in humans were Linden et al. (1987) [23]. The article features the remarkable number of four experiments, each with a different paradigm for assessing the dependence of the aSSR on attention.

In the first experiment, stimuli of varying intensity were presented to the subjects. In the attend condition the subjects had to count the changes in intensity, while in the ignore condition they read a book. The aSSR was recorded via EEG at Cz referenced to the left mastoid. The result was an increase of amplitude and a decrease of phase with increasing intensity, but no effect of condition was found, although such an effect was found for the ERPs. The second experiment used a dichotic-listening task. One ear of the subjects was stimulated with a carrier frequency of either 500 Hz or 1000 Hz and a modulation frequency of either 37 Hz or 41 Hz. The task was to count the occurrences of changes of the carrier frequency to either 535 Hz or 1050 Hz in just one ear. This time the aSSR was recorded at three sites, Cz, Fz, and Pz, relative to the left mastoid. Again, an effect was found for the ERPs but not for the aSSR. The third experiment was similar to the second one except that there was another condition in which subjects read a book. Again, a higher amplitude in the attend condition compared to the ignore condition was observed only for the ERPs and not for the aSSR. The fourth experiment used the same paradigm as the second one, but this time averaging the signal over time. This did not lead to any conclusions that had not already been drawn from the other experiments.

Seventeen years later, Ross et al. (2004) [40] made use of advanced technology for recording and analyzing magnetic activity at the skull surface, using whole-head MEG, to answer the question whether the aSSR is in fact modulated by selective attention. Using an amplitude-modulated tone with a carrier frequency of 500 Hz and a modulation frequency of 40 Hz, subjects attended either to an irrelevant slide show of landscapes etc. or to a change in the modulation frequency to 30 Hz. This resulted in an increase in amplitude of the aSSR in the attend condition that was significant for almost the complete time course.

These two findings seem contradictory. Ross (2004) [40] argues that this may come from the differences in the tasks that were used. Linden (1987) [23] used tasks that focused on the intensity or carrier frequency of the tones, while Ross et al. (2004) [40] used the modulation frequency as the target, which would be more closely bound to the generation of the aSSR. He furthermore argues that the single-channel EEG recording of Linden [23] may not be sufficient to obtain a clear aSSR, as they only recorded at Cz (and in one experiment also at Pz and Fz), while Ross (2004) [40] and other studies (Ross (2005) [39], Pantev et al. (1996) [30], Weisz et al. (2004) [49]) found the aSSR to be localized in the auditory cortex.

These findings lead to the conclusion that the auditory steady-state response can be modulated by attention, although care has to be taken in the design of the experiment, as contradictory findings have been reported. Another remaining question is whether the latency or phase of the aSSR can be modulated by attention, as this has already been shown for its visual counterpart.

Further scientific applications

As already mentioned above, there has not been much research considering specific cognitive functions other than attention, such as expectancy, motivation, or control. Nonetheless, the steady-state response has been used to study other aspects of human behavior and cognition. This section gives a brief overview.

Emotional valence and the visual steady-state response. Kemp et al. (2002) [22] conducted a study using three sets from the International Affective Picture System (IAPS) varying by valence (neutral, positive, negative). Using a 13 Hz peripheral flicker, a large increase of the amplitude in the negative-valence condition was found for left temporoparietal, posterior frontal, and right anterior temporal regions. For the positive-valence condition an increase in amplitude in frontal regions and a decrease in occipital regions was found. While an interpretation was given for the decrease of the amplitude, attributing it to event-related desynchronization (ERD) and thus to the involvement of a larger neuronal network according to Pfurtscheller & da Silva (1999) [32], the increase was not interpreted.

Keil et al. (2003) [21] did a similar experiment with a slightly different methodological approach. They used a flicker frequency of 10 Hz and had the entire stimulus flicker. Their results were again similar to those of Kemp et al. (2002) [22], although no decrease of amplitude was reported. Analysis of the phase of the visual SSR revealed a modulation of phase delay as a function of emotional arousal.


Tinnitus. In his recent diploma thesis, Schlee (2006) [45] used the aSSR to examine its characteristic responses in subjects suffering from tinnitus. Although he was not able to show a difference in steady-state amplitude between tinnitus patients and controls, he used phase coherences to study the functional coupling between different sources located throughout the brain. Subjects listened to three different tones while MEG was recorded. The tones were of increasing frequency, of which the highest was located approximately at the frequency where the amount of hearing loss increased maximally in the tinnitus patients. Some remarkable findings were made using this approach of analyzing the phase coherence. These included the right parietal lobe being discussed as a possible "relay station", as it has the most and strongest connections to other sources. Furthermore, the phase coherence between the right parietal source and the frontocentral source showed an almost perfect correlation with tinnitus distress.

1.1.4 Clinical Applications for the Steady-State Response

As the basic methodology is relatively simple and the SNR is favorable, the SSR also offers interesting possibilities for clinical applications.

As the scope of this diploma-thesis is not the clinical application of the SSR, I will only give a brief overview.

Clinical Applications for the aSSR

The review of Picton et al. (2003) [33] includes an overview of clinical applications of the aSSR. The most detailed part concerns the use of the aSSR as an objective measurement for audiometry, which was later evaluated again by Canale et al. (2006) [4]. Both studies conclude that the aSSR is an effective method to measure objective audiograms, although Picton et al. (2003) [33] go into much more detail by also discussing bone conduction, audiometry at 40 Hz and 80 Hz, etc.

Picton et al. (2003) [33] also report findings in the fields of neuropsychology and anesthesia. While the aSSR does not seem to be a reliable predictor of neuropsychological diseases, it seems to be a good indicator of unconsciousness during anesthesia, helping to prevent unintentional intraoperative awareness. It is not stated whether this is used in today's clinical practice.

1.2 Top-Down versus Bottom-Up contribution to Perception

The perception of sensory input to the brain surely depends on "what actually comes in", i.e. measurable physical properties of the stimulus like intensity. Input is received by the receptors (photoreceptors in the eye, hair cells in the cochlea, cells sensitive to touch, pain, temperature etc. in the skin), passes the brainstem, feeds into the thalamus, and eventually reaches the primary sensory areas of the brain. From there the signals are analyzed by secondary systems and eventually converge in polymodal areas of the brain, forming the actual percept. This process is commonly referred to as "bottom-up processing", as the signal comes in at a low stage of processing and is further refined and analyzed, so the signal at higher stages depends on input from lower stages.

Top-down processing refers to the opposite principle: information from a higher stage of processing is used to modulate new input at lower stages. This principle is found quite often in the brain. One example is the attenuation or amplification of sounds in the cochlea by the outer hair cells, which receive input from the periolivary nucleus.

While bottom-up processing e.g. enables the individual to rapidly shift attention to a salient feature of potential importance, top-down processing enables cognition to alter these processes, because the individual may be hungry and thus looking for a colored spot which may be food (Connor et al. 2004 [7]).

The common synonyms, "stimulus driven" for bottom-up and "goal directed" for top-down, are good descriptions of these two different ways of modulating processing in the brain. Nonetheless, care has to be taken not to see them as independent but as overlapping principles that optimize e.g. attentional performance (Sarter et al. 2001 [44]).

This section will give a summary of the research with a focus on top-down processes and the auditory domain.

1.2.1 Attention

The role of bottom-up versus top-down processes in attention has been the topic of many recent studies (e.g. Debener et al. 2002 [9], Debener et al. 2003 [10], Hamker 2006 (in press) [18], Ogawa & Hidehiko 2006 [28]) and reviews (e.g. Deco & Rolls 2005 [11], Egeth & Yantis 1997 [14], Sarter et al. 2001 [44]).

Debener et al. (2002) [9] conducted a study in which they tried to distinguish top-down and bottom-up processing by using two components of the auditory event-related potentials, namely the "novelty P3", detectable at about 300 ms post-stimulus, for bottom-up and the "target P3", detectable at about 400-600 ms post-stimulus, for top-down processing. They used a paradigm called the "novelty oddball". In this paradigm three stimuli are used: one frequent tone, one infrequent tone, and several "novel" sounds. While the EEG is recorded, subjects hear the frequent tone 80%, the infrequent tone 10%, and one of the "novel" sounds 10% of the time. The task for the subjects is to count the occurrences of the infrequent tone. The infrequent tone should therefore elicit the "target P3" while the novel sounds should elicit the "novelty P3". The infrequent tone served as the target stimulus for the subjects, requiring them to actively pay attention to it. As attention is directed to the stimulus by cognitive processes from higher levels (the motivation to detect the infrequent tone), this reflects top-down modulation. The opposite is true for the "novel" sounds. The subject does not actively pay attention to these sounds and does not expect them. As they are also very different from the tones (in the experiment the tones were pure sine tones and the sounds were environmental sounds belonging to categories such as "animals" or "machines"), the subject will nonetheless detect them as odd and attention is automatically drawn towards them. As attention is automatically directed to the stimulus by the stimulus itself (or rather: by lower-level processes) and not by higher-level cognitive processes, this reflects bottom-up modulation. Debener et al. (2002) [9] conducted two recording sessions, each consisting of two blocks, separated by one week. They found that the "novelty P3" showed habituation within one session while the "target P3" did not. The "target P3", however, showed a significant decrease in the second session compared to the first, while this was not shown for the "novelty P3". Although they admit several shortcomings of their design, they conclude that in a novelty-oddball design it is possible to distinguish between top-down and bottom-up processes by analyzing the "novelty P3" versus the "target P3".

One year later, Debener et al. (2003) [10] used the same paradigm to show that top-down attentional processing enhances gamma-band activity at about 40 Hz. They showed a significant enhancement of gamma-band activity for the target stimulus compared to the standard, whereas the activity remained the same for the novel stimulus.

1.2.2 Expectancy

Although attention is by far the most studied aspect of top-down modulation of behavior and its neural correlates, there has at least been some research on expecting a positive or negative reward (e.g. Breiter et al. 2001 [3]) and on expectancy that should lead to directed attention (e.g. Kastner et al. 1999 [20]).

In 1985 Perruchet [31] studied the role of expectancy in conditioning. In his experiment subjects heard a tone and were told in advance that in 50% of the trials this tone would be followed by a short puff of nitrogen to the eye that elicits an eye-blink response. In terms of Pavlovian conditioning, 50% of the trials had a CS (the tone) - US (air puff) contingency while the other 50% did not. The subjects' task was to rate the likelihood of the US appearing. Without the subjects knowing, the contingency was organized in runs. One run consisted of 1 to 4 consecutive CS-US pairings or CS-alone presentations. Referring to the phenomenon called the "gambler's fallacy" (Ayton & Fischer 2004 [2]), he made two speculations about what could happen to the probability of the CR (the eye-blink response) depending on how often a pairing or non-pairing was presented in a row:

1. The probability of the CR increases with many consecutive pairings and decreases with many consecutive non-pairings.

2. The probability of the CR decreases with many consecutive pairings and increases with many consecutive non-pairings.

While the first hypothesis is derived from strength theory, i.e. the better the contingency the more likely the CR, the second is derived from expectancy theory, i.e. the expectation of the subjects that after a certain number of consecutive trials the outcome should finally change because the overall probability is 50%. In cognitive psychology this logical fallacy - i.e. the belief that events in the past affect random events - is called the gambler's fallacy (for a recent discussion see Ayton & Fischer 2004 [2]).

For example: you toss a fair coin three times and it happens to come up "tails" three times. When you now ask someone what the most likely result of the next toss would be, they will most likely say "heads", because if the coin is fair the chance of getting "tails" four times in a row is minimal (Ayton & Fischer 2004 [2]). Of course, this is the wrong answer, as every toss is independent of the previous ones.

For the expectancy rating only one hypothesis was made, stating that according to expectancy theory the rating should decrease with more consecutive paired trials and increase with more consecutive non-paired ones. The results confirmed the strength-theory hypothesis for the probability of the CR and the expectancy-theory hypothesis for the expectancy rating.

Although Perruchet does not discuss his findings with a view to top-down versus bottom-up modulation, it seems worth doing so. The findings of Perruchet seem ambiguous at first sight, as we find a decrease of one behavioral parameter and an increase of the other. Considering the nature of these responses, however, it becomes clear that there is a distinction to be drawn.


While the conditioned eye-blink response is automatic and not affected by higher cognitive processes such as intention, the opposite is true for the expectancy rating, which is an intentional response produced by cognitive processes such as expectancy, intention, and experience. Referring back to the introduction of top-down and bottom-up processing, the synonyms "goal-directed" and "stimulus-driven" further distinguish the two processes of the experiment: a conditioned response is without doubt driven only by the stimulus, while the expectancy rating is directed by the goal of delivering an adequate forecast of the next event. Therefore, we can separate Perruchet's results into one modelling bottom-up processing and one modelling top-down processing, which can and should be interpreted in their own right.

1.3 Summary

In this chapter, I reviewed current literature concerning the steady-state response with an emphasis on the auditory domain and top-down versus bottom-up processing.

The aSSR was explained in terms of its nature, its parameters, and its applications in science and in the clinic, showing that it has advantages over transient evoked responses in neuroscience by delivering a better SNR with fewer trials. It was also shown that the parameters of the aSSR are modulated by cognitive processes such as attention and emotional valence.

The role of top-down processes was also made clear, ending with a discussion of a study by Perruchet (1985) [31] that was able to segregate top-down and bottom-up processes in a Pavlovian conditioning experiment.

As shown, there has been no controlled research studying the influence of top-down processes other than attention on the aSSR. The experiment by Perruchet (1985) [31] seems a good starting point. The main shortcoming of this paradigm for the separate study of top-down influences on the aSSR is that it still includes bottom-up processing that might also influence the neurophysiological response. Although it is clearly impossible to eliminate bottom-up influences completely, they can be decreased significantly.

The following experiment implements a design that is derived from the work of Perruchet (1985) [31] but tries to emphasize top-down processes and to decrease bottom-up influences. This is done by changing the experimental design of Perruchet in several ways:

1. As the eye-blink conditioning clearly is a bottom-up process, it will not be used.

2. Instead of rating the likelihood of the air puff, the behavioral task will deal with the tone.

3. In contrast to Perruchet, who told the subjects that they were about to hear the same tone all the time, subjects will be told that there are two different tones they have to discriminate, although there will be only one tone for the whole session.

4. Feedback will be given by an auditory signal. This feedback will be pseudorandomly organized in runs according to the design of Perruchet.

5. In Perruchet's design, subjects rated the likelihood of the air puff before the corresponding tone started. This is changed to a procedure in which the subjects are required to do the rating during the last seconds of the presentation of the stimulus, because the subjects are told that the rating corresponds to the tone.

6. The aSSR will be recorded through EEG. Therefore, the stimulus will be an amplitude-modulated tone.

This way, it is ensured that the behavioral response as well as the aSSR depends only on what the subject infers from the pseudorandomly given feedback.


1.4 Aims of the current study

The following aims were pursued with this study:

1. Try to reproduce the results of Perruchet (1985) [31]: more consecutive feedbacks of having heard the higher tone lead to a higher probability of rating the next tone as low. This is important to ensure the validity of the study.

2. Examine whether auditory responses such as the ERP and the amplitude of the aSSR are modulated by string length in the auditory areas. Especially the studies by Debener et al. (2003) [10] and Ross et al. (2004) [40] foster the expectation of an increase in amplitude with higher importance for the subjects. Because Ross et al. (2004) [40] found this increase in primary auditory sources, I expect a significant change in the same area.

3. Investigate whether changes in the ERP or aSSR can be observed in other regions of the brain, such as those regarded as crucial for attention or expectancy (e.g. frontal regions).

4. Following the work of Schlee (2006) [45], it is also reasonable to examine whether a modulation of phase coherences between various areas can be found for increasing string lengths. This becomes even more important as Schlee (2006) [45] as well as Weisz et al. (2004) [50] discuss tinnitus as an at least partially top-down phenomenon.


Chapter 2

Material and Methods

2.1 Subjects

Sixteen healthy individuals (five men; mean age and standard deviation: 22.3 ± 2.348) who reported normal hearing took part in this experiment. All participants were right-handed according to the Edinburgh Handedness Inventory (Oldfield 1971) [29]. For a summary of the subjects see table 2.1.

Two subjects had to be rejected because of too many artifacts in the EEG data. Two further subjects were rejected because a binomial test on the behavioral data revealed that they hit one of the buttons almost constantly (see table 2.2). This left 12 subjects for the analysis (4 men; mean age and standard deviation: 22.25 ± 2.49).

Subject   Age   Sex   LQ
1         23    f     90
2         21    f     100
3         19    f     100
4         21    f     90
5         24    f     100
6         22    m     100
7         24    f     90
8         27    f     100
9         22    f     100
10        20    m     90
11        25    m     100
12        20    m     100

Table 2.1: Subjects participating in the experiment. Lateralization Quotient (LQ) measured with the Edinburgh Handedness Inventory (Oldfield 1971) [29]. LQ = 0 means completely left-handed; LQ = 100 means completely right-handed.

Subject   Age   Sex   p(high)   p
13        35    m     0.08      <0.001
14        32    f     0.99      <0.001

Table 2.2: Subjects rejected because of their behavioral data.

2.2 The Experiment

2.2.1 Design

Although the subjects were told that they were participating in a frequency-discrimination experiment in which they had to discriminate between two tones, there was only one tone in the real task, which never changed.

Therefore, the subjects heard the same tone for the whole task but were given random feedback on whether they had just heard the high or the low tone. Feedback consisted of a loud noise for the high tone or silence for the low tone.

The feedback was not actually totally random but was organized in strings following Clark et al. [6].

String Length   High   Low
1               18     18
2               9      9
3               6      6
4               4      4

Table 2.3: Number of strings for the two conditions and the four possible string lengths.

String length is thus defined as the number of equal feedbacks (high/low) in a row. Figure 2.1 shows an example of three runs with string lengths 2, 3, and 1.

Figure 2.1: This figure shows a possible course of the experiment to clarify the concept of string length. Only the feedbacks are depicted. There is feedback for the high tone twice, followed by feedback for the low tone three times and feedback for the high tone once, so the corresponding string lengths are 2, 3, and 1.

Altogether, the real task consisted of 140 trials.
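For illustration, the following R sketch (my own construction, not the original presentation script) builds one possible pseudorandom feedback sequence with the string-length counts from table 2.3, alternating runs of "high" and "low" feedback:

    set.seed(1)

    # number of strings per string length (table 2.3), identical for both conditions
    string_counts <- c(18, 9, 6, 4)

    # one shuffled vector of run lengths per condition
    runs_high <- sample(rep(1:4, times = string_counts))
    runs_low  <- sample(rep(1:4, times = string_counts))

    # alternate high/low runs and expand them into single-trial feedback labels
    runs     <- as.vector(rbind(runs_high, runs_low))
    conds    <- rep(c("high", "low"), times = length(runs_high))
    feedback <- rep(conds, times = runs)

    length(feedback)  # 140 trials in total
    table(feedback)   # 70 "high" and 70 "low" feedbacks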

2.2.2 Stimuli

Two kinds of stimuli were needed for the experiment:

1. Three steady-state tones. Two are of different frequencies and are needed for the practice. The frequency of the third lies in the middle of the two practice tones; additionally, it is mixed with a high band-pass noise.

2. One band-pass noise for the feedback.

All stimuli were created using a Matlab script.


Steady-State Tone

Three steady-state tones, each lasting 10 seconds, were used for the experiment. For details refer to table 2.4.

Task            Carrier Frequency   Modulation Frequency   Noise
Practice Low    975 Hz              39.0625 Hz             no
Practice High   1025 Hz             39.0625 Hz             no
Real            1000 Hz             39.0625 Hz             yes

Table 2.4: Carrier frequencies, modulation frequencies, and whether noise was added, for the three steady-state tones.

The third stimulus was composed of the steady-state tone and narrow-band noise (7000-10000 Hz) at 3 dB above the peak amplitude of the steady-state tone.

The steady-state tones were created by first generating a pure tone (puretone) with a maximum amplitude peakamp and the carrier frequency f. Then the envelope for the amplitude modulation (am) was generated with the modulation frequency ampf. Finally, the two resulting vectors were multiplied to form the steady-state tone. The following functions were used for the generation, with x denoting the sample index at a sampling rate of 44100 Hz:

$\mathrm{puretone}(x) = \mathrm{peakamp} \cdot \sin(2 \pi f x / 44100)$

$\mathrm{am}(x) = \sin(2 \pi (\mathrm{ampf}/2) \, x / 44100)$

$\mathrm{sst}(x) = \mathrm{puretone}(x) \cdot \mathrm{am}(x)$
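A minimal R sketch of the same construction follows (the original stimuli were generated with a Matlab script; the peak amplitude of 1 and the variable names here are illustrative assumptions):

    fs      <- 44100         # sampling rate in Hz
    dur     <- 10            # tone duration in seconds
    f       <- 1000          # carrier frequency in Hz (real task)
    ampf    <- 39.0625       # modulation frequency in Hz
    peakamp <- 1             # maximum amplitude (illustrative)

    x <- 0:(fs * dur - 1)    # sample indices

    puretone <- peakamp * sin(2 * pi * f * x / fs)   # carrier
    am       <- sin(2 * pi * (ampf / 2) * x / fs)    # envelope at half the modulation frequency
    sst      <- puretone * am                        # amplitude-modulated steady-state tone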

Noise was created using random values multiplied by the amplitude. Afterwards, the noise was first low-pass filtered and then high-pass filtered and added to the tone.

To simplify matters, the combination of the steady-state tone and the noise will be called the "auditory stimulus" for the rest of this diploma thesis.

The feedback-noise

The noise was generated using random numbers. Afterwards it was band-pass filtered (3000-6000 Hz). The duration was 1 second.

For the loudness tuning, the subjects were instructed that they would hear a noise by clicking on a certain button on the screen. Two other buttons controlled the loudness. They were asked to make the noise as loud as possible; it should be uncomfortable but not painful.

2.2.3 Procedure

After giving written consent, the subjects were seated in the EEG-cabin and were prepared for the EEG-recording.

After the preparations were finished, the subjects were told by the investigator that the purpose of this experiment was to assess the ability of normal-hearing people to discriminate two tones while being disturbed by a noise.

The subjects were told that after a few preparations they would have to solve the real task, which consisted of hearing one of the two noise-disturbed tones for about 6 seconds and then having 2 seconds to choose which tone they had heard. The subjects had to report this by clicking one of two buttons on the screen. Feedback would be given such that the higher tone was always followed by a loud noise while the lower one was followed by silence. The investigator also tried to make clear that the two tones were chosen to make the task rather easy when there was no noise but rather hard when the tones were played together with the noise.


Then the investigator explained the procedure to the subject:

1. Have the subject "tune" the loudness of the "feedback-noise".

2. Passively listen to each of the two tones three times.

3. Practice the task without the disturbing noise.

4. Do the real task.

Getting familiar with the tones

The subjects then passively listened to the two practice tones (975 Hz and 1025 Hz, without noise) three times each. They were told that this was for "getting to know the tones".

The Practice

The practice was a short version (only 10 trials) of the real task (section 2.2), except that there really were two different tones with consistent feedback. Although it was not easy for some subjects to discriminate these tones, all of them managed the task without problems after about 5 trials.

The real task

The design of the real task is shown in figure 2.2.

First the auditory stimulus was presented. After six seconds, the subjects were asked which tone they had heard, while the auditory stimulus was still playing. After 2 seconds the buttons disappeared, the auditory stimulus stopped, and the feedback noise was played in 50% of the trials.


Figure 2.2: Timeline of the task

2.2.4 Stimulus Presentation and Data Acquisition

Stimulus Presentation and Response Acquisition

Stimulus presentation, presentation of the instructions and rating scheme, and behavioral response acquisition were done using Psyscope X [24] running on an Apple Macintosh iBook (900 MHz PowerPC G3) under Mac OS X version 10.3.9.

The instructions and the rating scheme were shown on a 17-inch CRT monitor situated about 1.5 meters from the subject.

Auditory stimulation was delivered to the left ear through stereo headphones (Sennheiser HD pro 180).

To respond, subjects used a mouse to move the cursor to the corresponding button on the screen and pressed the mouse button to deliver the response.


EEG-data recording

EEG recording was carried out in a dimly lit, sound-attenuated room using a 64-channel EEG system (Neuroscan Synamps). The sampling rate was 250 Hz. Online filtering with a low-pass of 100 Hz and DC filtering was applied. Impedance between the electrodes and the scalp was kept under 5 kΩ.

After data acquisition, the locations of the electrodes were recorded using a 3Space Isotrak II system (Polhemus) to allow source projection afterwards.

2.3 Data Analysis and Statistical Analysis

In the description of the experimental design, the tone was related to the feedback and string length that came after the tone, as this was the way it was presented to the subjects. As the scope of this experiment is to examine the influence of this feedback and string length on the behavioral and electrophysiological response, this is changed for the analysis: the behavioral and electrophysiological response to a tone is now related to the feedback and string length before the tone. For example, the response to high feedback of string length four means that the previous tone was followed by loud noise for the fourth time in a row.

2.3.1 Behavioral Data

Data-extraction and statistical analysis were done using R [36].

A two-way ANOVA was performed on the data using a linear mixed-effects model (Pinheiro & Bates, 2000 [34]). This model allows the specification of classical fixed effects and introduces random effects, which take into account that the data are not taken from the whole population but only from a sample of it.

In this case, the fixed effects were condition and string length. The random effect was the variance that could be attributed to the sample of subjects.
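A minimal sketch of such a model fit in R using the nlme package (Pinheiro & Bates); the data frame and column names are illustrative assumptions, not the original analysis script:

    library(nlme)

    # behav: one row per subject, condition (high/low feedback) and string length,
    # with the mean rating as the response variable
    fit <- lme(rating ~ condition * string_length,
               random = ~ 1 | subject,
               data   = behav)

    anova(fit)   # F-tests for the fixed effects and their interaction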

2.3.2 EEG-data

Analysis of the EEG data began with a visual inspection to find bad channels, which were interpolated in BESA (MEGIS GmbH) using spline interpolation.

The data were then artifact-corrected using the "surrogate method" (Berg & Scherg 1994), re-referenced to the average reference, and projected onto a source montage with 8 sources using BESA (MEGIS GmbH). The source montage used in this work was the same as in the diploma thesis of Winfried Schlee (2006) [45]. An overview is provided in figure 2.3; for details see the appendix. The three orientations of the source projection were combined into one by calculating the maximum PCA component of the three orientations.

Afterwards the data were exported to EEGLAB [12]. First, the data were band-pass filtered. The cut-off frequencies differed depending on whether event-related potentials or the aSSR were analyzed, so the filter settings are detailed in the sections describing the individual methods. The data were then grouped into epochs reaching from 1 second pre-stimulus to 5 seconds post-stimulus, and the mean of the pre-stimulus interval was subtracted. Epochs were grouped by two factors, the first being the kind of feedback (high vs. low), the second the number of consecutive trials with the same feedback.


Figure 2.3: An illustration of the source-montage used in this study. (After Schlee (2006) [45])

Event related Potentials

For the ERP analysis, the string lengths 2-4 were condensed into one group to keep the number of trials at a level allowing a proper ERP analysis, as a high number of trials is needed to increase the signal-to-noise ratio. The data were band-pass filtered (HP: 1 Hz, LP: 30 Hz). Afterwards the mean of the groups for each subject was calculated and exported to R [36] for further analysis.

The amplitude and latency of the N100 were calculated because it was the most prominent component and observable in all subjects. The N100 was defined as the most negative sample in the time range between 48 and 180 ms.
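A minimal R sketch of this peak definition (the variable names are assumptions; the time vector is assumed to be given in milliseconds relative to stimulus onset):

    # erp: numeric vector of one averaged waveform; times: matching time vector in ms
    find_n100 <- function(erp, times, win = c(48, 180)) {
      idx  <- which(times >= win[1] & times <= win[2])  # samples inside the search window
      peak <- idx[which.min(erp[idx])]                   # most negative sample
      c(amplitude = erp[peak], latency = times[peak])
    }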

A two-way ANOVA was performed on the amplitude and latency for each source using a linear mixed-effects model. "Condition" and "Length" were used as fixed factors while "Subject" was used as the random factor. "Condition" means the kind of feedback (high or low); "Length" refers to the string length.

Steady-State Analysis

The data were band-pass filtered (HP: 30 Hz, LP: 50 Hz) and separated into epochs. Afterwards a moving average was applied to each trial, which works in the following way: each epoch was cut into numerous frames consisting of four cycles of the modulation frequency. Such a frame consisted of 25 data points and was 100 ms long. To eliminate the effect of the transient response at stimulus onset, the first 750 ms of the data were discarded. The second frame started one cycle (25 ms) after the first, the third another cycle later, and so on. The resulting 201 frames were then averaged. This condensed a trial of six seconds into an averaged frame of 100 ms.
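The following R sketch illustrates this kind of cycle-locked moving average with the parameters stated above; it is an illustration under the assumption of a 250 Hz sampling rate, not the original analysis code:

    # x: single-trial source waveform sampled at fs Hz
    average_frames <- function(x, fs = 250, frame_ms = 100, step_ms = 25, skip_ms = 750) {
      frame_len <- round(frame_ms / 1000 * fs)                   # 25 samples = four modulation cycles
      step      <- round(step_ms  / 1000 * fs)                   # advance by roughly one cycle
      start     <- round(skip_ms  / 1000 * fs) + 1               # skip the onset transient
      starts    <- seq(start, length(x) - frame_len + 1, by = step)
      frames    <- sapply(starts, function(s) x[s:(s + frame_len - 1)])
      rowMeans(frames)                                           # the averaged 100 ms frame
    }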

A Fast-Fourier-Transform (FFT) was then computed on the condensed data to extract the amplitude and phase at the modulation-frequency.

Steady-State Amplitude First, the data was normalized in the following way: For each subject the mean amplitude over all conditions and sources was calculated. The amplitudes of each subject were then divided by the individual mean. The data was then exported to R [36] for statistical analysis.

The mean amplitude of each group was calculated and a three-way ANOVA was computed on the data, again using a linear mixed-effects model, with the factors "Source", "Condition" and "Length" as fixed effects and the factor "Subject" as random effect. "Source" refers to the 8 sources of the source-montage; "Condition" and "Length" are defined as in the ERP-analysis section.

For a refined analysis, the 8 sources were analyzed separately, this time using a two-way (Condition * Length) ANOVA with the same method as stated above.

Source Coherence The Rayleigh Test of Uniformity was applied to the phase differences using the PhasePACK package for Matlab (Rizutto 2004 [38]). For this purpose, the phase differences between the 8 sources were calculated for each condition. The mean resultant length served as test statistic. The null hypothesis of a uniform (random) distribution of the phase differences around the circle was tested against the alternative hypothesis of a concentrated, non-uniform distribution. In the case of perfectly aligned phase differences, the mean resultant length equals one. The resulting data was then exported to R [36] for further statistical analysis.
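The statistic itself is simple to compute by hand; the following R sketch is an equivalent of the Rayleigh test for one pair of sources (the actual analysis used the PhasePACK toolbox), with phase_a and phase_b as assumed vectors of single-trial phases in radians:

    # Rayleigh test on the phase differences between two sources.
    rayleigh <- function(phase_a, phase_b) {
      d    <- phase_a - phase_b
      n    <- length(d)
      rbar <- Mod(mean(exp(1i * d)))   # mean resultant length (test statistic)
      z    <- n * rbar^2
      p    <- exp(-z)                  # first-order approximation of the p-value
      c(plv = rbar, p = p)
    }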

Fisher's z-transform was applied to the data. Each combination of sources was then analyzed separately with a two-way ANOVA using a linear mixed-effects model. The fixed effects were "Condition" and "Length", the random effect "Subject". These were defined as in the analysis of the aSSR amplitude.
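Fisher's z-transform is the inverse hyperbolic tangent; applied to the phase-locking values before the mixed-effects model, this step could look as follows (the data frame coh and its columns are assumptions):

    # Variance-stabilizing transform of the phase-locking values, followed by
    # the per-pair mixed-effects ANOVA.
    coh$z <- atanh(coh$plv)
    fit   <- nlme::lme(z ~ Condition * Length, random = ~ 1 | Subject, data = coh)
    anova(fit)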


Chapter 3

Results

3.1 Behavioral Data

Figure 3.1 shows the behavioral data. For a string length of one, subjects expected the same tone to follow on the next trial. For string lengths of two and three, there is a trend that subjects' expectation was at about 50% for either tone to follow. For a string length of four, subjects expected the other tone to appear.

The results of the linear mixed-effects model statistic are shown in table 3.1. The analysis revealed a significant effect for the "String Length * Condition" interaction. Neither of the main effects was significant.


[Figure 3.1: Response (y-axis, 0.0 to 1.0) as a function of Previous Position in String (x-axis: 1–4); legend entries: Condition Silence, Noise, Interaction with correction.]

Figure 3.1: Plot showing the behavioral Response depending on the String Length and Condition. Higher values on the y-axis mean the subject rated the current tone as the high tone.


                            numDF   denDF     F-value   p-value
(Intercept)                     1      88   259.60860    <.0001
String Length                   1      88     0.02978    0.8634
Condition                       1      88     0.77259    0.3818
String Length * Condition       1      88     5.48285    0.0215

Table 3.1: Linear Mixed-Effects Model Statistic for the behavioral data. Numerator degrees of freedom (numDF), denominator degrees of freedom (denDF), F- and p-values are given for the full model.

3.2 Event Related Potentials

To validate the data, the topography was plotted for all channels at the time at which the N100 reached its maximum (110 ms post-stimulus). This plot (figure 3.2) shows a large negativity at central electrodes.

Figure 3.2: Topography of the N100 ERP-component.


3.2.1 Amplitude

The ANOVA revealed an effect for the interaction "Condition * Length" on the amplitude of the right frontal source (see table 3.2). Figure 3.3 shows that for a string length of one, the amplitude of the "high" condition is larger than that of the "low" condition, resulting in a large amplitude difference at this string length. In contrast, for the group of string lengths greater than one, the amplitude of the "low" condition is larger than that of the "high" condition, resulting in a smaller difference. None of the other sources showed a significant effect.

                     numDF   denDF   F-value   p-value
(Intercept)              1      33    36.939    <.0001
Condition                1      33     1.630    0.2107
Length                   1      33     1.008    0.3226
Length * Condition       1      33     7.153    0.0116

Table 3.2: Linear Mixed-Effects Model Statistic for the N100-amplitude of the right frontal source. Numerator degrees of freedom (numDF), denominator degrees of freedom (denDF), F- and p-values are given for the full model.


[Figure 3.3: amplitude (y-axis, −14 to −20) by length (x-axis: 1, 2+); legend: Condition High / Low.]

Figure 3.3: Interaction plots showing the amplitude for both feedback conditions (High/Low) and String Lengths for the right frontal source. Error bars show standard errors. The y-axis is inverted to show higher amplitudes at the top.


3.2.2 Latency

The ANOVA revealed an effect for "Condition" on the latency of the anterior cingulum source (see table 3.3). Figure 3.4 shows a consistently higher latency for the "low" condition. While the latency of the "low" condition remains constant over all string lengths, that of the "high" condition further declines with increasing string lengths. None of the other sources showed a significant effect.

                     numDF   denDF   F-value   p-value
(Intercept)              1      33   239.823    <.0001
Condition                1      33     4.860    0.0346
Length                   1      33     0.220    0.6421
Length * Condition       1      33     0.266    0.6093

Table 3.3: Linear Mixed-Effects Model Statistic for the N100-latency of the anterior cingulum source. Numerator degrees of freedom (numDF), denominator degrees of freedom (denDF), F- and p-values are given for the full model.


[Figure 3.4: latency (y-axis, 85 to 110 ms) by length (x-axis: 1, 2+); legend: Condition Low / High.]

Figure 3.4: Interaction plots showing the latency for both feedback conditions (High/Low) and the String Lengths for the anterior cingulum source. Error bars show standard errors.


3.3 Auditory Steady-State Response

Figure 3.5 depicts the topography of the aSSR at four points in time, showing the polarity reversal of the steady-state field. The intervals between the time points correspond to the modulation-frequency.

[Figure 3.5: topographic maps at 9 ms, 21 ms, 34 ms and 46 ms; color scale from −0.1 to 0.1.]

Figure 3.5: Topography of the aSSR at 4 points in time chosen to match the modulation-frequency.

3.3.1 Amplitude

The three-way ANOVA revealed no significant effect except for the factor "Source" (see table 3.4). Analyzing the sources separately reveals significant effects for the anterior cingulum source (see table 3.5), while the left frontal source does not show significant effects, although p-values only slightly larger than 0.05 imply a trend (see table 3.6). The other sources do not show any significant effect.

Figure 3.6 shows that the difference between the two conditions remains small for the first one to three consecutive equal feedbacks and suddenly increases dramatically when the subject had received the same feedback four times in a row.

For both sources, the "High" condition, which had a loud noise as feedback, produced a smaller amplitude than the "Low" condition, in which no sound was played as feedback.

No effects were found for the two temporal sources; both main effects as well as the interaction were far from significant (tables 3.7 and 3.8).


                              numDF   denDF    F-value   p-value
(Intercept)                       1     749   4409.041    <.0001
Length                            1     749      0.383    0.5360
Source                            1     749    236.473    <.0001
Condition                         1     749      0.665    0.4150
Length * Source                   1     749      0.008    0.9285
Length * Condition                1     749      1.712    0.1911
Source * Condition                1     749      0.001    0.9710
Length * Source * Condition       1     749      0.088    0.7666

Table 3.4: Linear Mixed-Effects Model Statistic for the amplitude of the aSSR. Numerator degrees of freedom (numDF), denominator degrees of freedom (denDF), F- and p-values are given for the full model.

                     numDF   denDF    F-value   p-value
(Intercept)              1      81    286.149    <.0001
Length                   1      81    0.48942    0.4862
Condition                1      81      6.865    0.0105
Length * Condition       1      81      5.667    0.0196

Table 3.5: Linear Mixed-Effects Model Statistic for the amplitude of the aSSR measured at the anterior cingulum source. Numerator degrees of freedom (numDF), denominator degrees of freedom (denDF), F- and p-values are given for the full model.


                     numDF   denDF   F-value   p-value
(Intercept)              1      81   286.862    <.0001
Length                   1      81    0.5137    0.4756
Condition                1      81     3.716    0.0574
Length * Condition       1      81     3.777    0.0554

Table 3.6: Linear Mixed-Effects Model Statistic for the amplitude of the aSSR measured at the left-frontal source. Numerator degrees of freedom (numDF), denominator degrees of freedom (denDF), F- and p-values are given for the full model.

                     numDF   denDF   F-value   p-value
(Intercept)              1      81   134.955    <.0001
Length                   1      81     0.199    0.6569
Condition                1      81     1.260    0.2650
Length * Condition       1      81     0.474    0.4932

Table 3.7: Linear Mixed-Effects Model Statistic for the amplitude of the aSSR measured at the left-temporal source. Numerator degrees of freedom (numDF), denominator degrees of freedom (denDF), F- and p-values are given for the full model.


[Figure 3.6, panels "Left Frontal" and "Cingulum": FFT amplitude (y-axis) by number of consecutive equal responses (x-axis: 1–4); legend: Condition Low / High.]

Figure 3.6: Interaction plots showing the FFT amplitude for both conditions and String Lengths. Error bars show standard errors. Only the left frontal and anterior cingulum sources are shown, as the others did not show any significant results.


                     numDF   denDF   F-value   p-value
(Intercept)              1      81   164.128    <.0001
Length                   1      81     1.088    0.2999
Condition                1      81   0.16282    0.6876
Length * Condition       1      81     0.004    0.9494

Table 3.8: Linear Mixed-Effects Model Statistic for the amplitude of the aSSR measured at the right-temporal source. Numerator degrees of freedom (numDF), denominator degrees of freedom (denDF), F- and p-values are given for the full model.

3.3.2 Source coherence

Source coherence was calculated between all sources. All combinations showed increasing phase-locking for higher string lengths, resulting in a significant main effect for "Length" (p < 0.0001). Tables 3.9 and 3.10 provide an overview of the main effect for "Condition" and of the interaction.

Interactions were found between the right parietal source and the left temporal, right frontal and anterior cingulum sources. The latter combination also showed a significant main effect for "Condition", which also became significant for the combination of the right temporal and right frontal sources.

Figure 3.7 shows that, except for the combination of the right temporal and right frontal sources, the difference in coherence was small for one to three repetitions of the same feedback but increased for four repetitions. The absolute phase-locking value was consistently higher for the "high" condition.

The mean phase-locking values themselves fail to reach significance, as shown in table 3.11. The mean phase-locking values rise with increasing string length, but so do the values at which the phase-locking would become significant.


           ri temp   le front   ri front   le par    ri par    occ       cing
le temp    0.7299    0.1840     0.7256     0.3840    0.0109*   0.3258    0.3543
ri temp              0.4576     0.2277     0.3286    0.4948    0.9819    0.2753
le front                        0.1109     0.2254    0.4387    0.6916    0.9357
ri front                                   0.1151    0.0123*   0.2076    0.8233
le par                                               0.5187    0.9571    0.5588
ri par                                                         0.1033    0.0022*
occ                                                                      0.4725

Table 3.9: p-values of the interaction Length * Condition between all sources. Significant effects are marked with an asterisk.


           ri temp   le front   ri front   le par    ri par    occ       cing
le temp    0.4665    0.1572     0.7146     0.7438    0.1429    0.5349    0.9681
ri temp              0.2590     0.0222*    0.1040    0.9973    0.1305    0.8949
le front                        0.2816     0.1664    0.7907    0.9519    0.9377
ri front                                   0.3351    0.0570    0.5709    0.8830
le par                                               0.6316    0.6102    0.4542
ri par                                                         0.9803    0.0398*
occ                                                                      0.5760

Table 3.10: p-values of the main effect Condition between all sources. Significant effects are marked with an asterisk.
