
Universität Konstanz

Faculty of Sciences, Department of Psychology

Examining Featural Underspecification of "TONGUE HEIGHT" in German Mid Vowels: An EEG Study

Diplomarbeit (diploma thesis) in Psychology

submitted by Verena Felder

January 2006

First examiner: Prof. Carsten Eulitz
Second examiner: Prof. Aditi Lahiri


Thank you ...

... Aditi Lahiri and Carsten Eulitz

for your support in conducting this study and for providing insight into the fascinating world of psycholinguistics during the past years.

... Claudia Friedrich

for the support and training that you gave me during my time as a Hiwi (student research assistant).

Thank you for your assistance in recording and evaluating the EEG data and for critical discussions on this Diplomarbeit. I really appreciated working

with you.

... Barış Kabak

for your explanations on the Turkish language system; and Barış Kabak and Sümeyra Erdem

for help in finding Turkish stimuli for the experiment.

Willi Nagl

for help with and discussions about statistics. Thank you for taking care that matters are understood rather than simply done.

... Barbara Awiszus and Christiane Wolf

for your flexible and reliable help in acquiring the EEG-data.

... Winfried B. Schlee and Susan, Christoph and Philip Gassner

for extremely helpful comments on content and style of this Diplomarbeit, respectively.

Mathias Scharinger and Frank Zimmerer

for the inspiring atmosphere in our office.

Angelika and Helmut Felder

for appreciating and encouraging a child’s curiosity. Thank you for allowing me to develop freely and for trusting in me. Thanks for unquestioning emotional and financial support. Sorry for alienating your concept of


0 ABSTRACT...1

1 INTRODUCTION ...2

1.1 Language: Common Sense or Astonishing Miracle? ... 2

1.2 Trying to Understand: Psycholinguistic Theories of Speech Perception ... 3

1.3 The FUL Model and its Basic Assumptions... 4

1.4 Experimental Evidence for the FUL model ... 7

1.4.1 Evidence for Underspecification from a Gating Study... 7

1.4.2 Evidence for Underspecification from Semantic Priming Experiments... 9

1.4.3 Evidence for Underspecification from Mismatch Negativity...11

1.4.4 Evidence for Underspecification from N400 Studies...13

1.5 The FUL Model under Fire: Its Opponents...14

1.6 The Present Study: Investigating the Representation of a Vowel in the Mental Lexicon ...15
1.6.1 Underlying Assumptions on the Representation of Features ...15

1.6.1.1 Reasons for the Underspecification of [] in German ...16

1.6.1.2 Reasons for the Specification of [] as [LOW] in Turkish ...16

1.6.2 Cross Modal Fragment Priming and its Behavioural and Electrophysiological Correlates ...17
1.6.3 Experimental Design and Hypotheses for the Present EEG Study ...20

1.6.3.1 In Search for a Phenomenon that Allows Insight ...20

1.6.3.2 Converting Abstract Assumptions into Concrete Predictions...23

1.6.4 Restriction to German Due to Missing Turkish Subjects ...25

2 METHODS ...26

2.1 Participants...26

2.2 Stimuli ...26

2.2.1 Target Words...27

2.2.2 Target Pseudowords ...28

2.2.3 Identical and Related Prime Fragments...29

2.2.4 Unrelated Prime Fragments...31

2.2.5 Critical primes...31

2.3 Electroencephalography (EEG) ...32

2.4 Experimental Design...33

2.5 Experimental Procedure...35

2.6 Data Analysis ...36

2.7 Statistical Analysis ...36


3 RESULTS...38

3.1 Behavioural Results ...38

3.1.1 Reaction Times...38

3.1.2 Error Rates ...39

3.2 Electrophysiological Results...40

3.2.1 Results for the 200 to 300 ms Time Window...43

3.2.2 Results for the 300 to 500 ms Time Window...45

4 DISCUSSION ...52

4.1 Behavioural Results ...53

4.2 Left Anterior Activation: The P350 Effect...54

4.2.1 Interpretation of the P350 Effect with Regard to the Crosslinguistic Design...56

4.2.2 A General Discussion on the P350 Effect ...57

4.3 Right Anterior Activation in the Time Course of the P350 ...59

4.4 The N400 Effect...62

4.5 Future Directions: Could Swiss German be Investigated instead of Turkish? ...64

4.6 Rethinking the FUL Model’s Assumption of Ternary Matching Logic ...65

4.7 Conclusions on the Present Study...71

5 REFERENCES...72

6 APPENDIX ...77

6.1 Appendix A: German Words and Pseudowords ...77

6.2 Appendix B: Abstractness Judgement ...79

6.3 Appendix C: German Fragments Used as Primes ...81

6.4 Appendix D: Problematic Items ...85

6.5 Appendix E: Electrode Positions ...87

6.6 Appendix F: Instruction ...88

6.7 Appendix G: ERPs over all electrodes for []-words and []-words ...89


6.11 Appendix L: Index of Tables ...93
6.12 Appendix M: Index of Figures ...96


0 Abstract

The present study was originally designed as a crosslinguistic investigation of asymmetries in phonological feature representations. The theoretical background of the hypotheses was the Featurally Underspecified Lexicon (FUL) model by Lahiri and Reetz (2002). This model assumes that words are stored in an abstract way, in terms of phonological features. Features that can be extracted from the speech signal but are redundant or prone to variation within the language are not stored, resulting in underspecified lexical entries. Based on this model it was hypothesised that the vowel [] differs in its phonological representation of TONGUE HEIGHT between Turkish and German: it is represented as [LOW] in Turkish but underspecified for this feature in German. This hypothesis was put to the test in a cross-modal fragment priming study, in which subjects listen to spoken word-onset fragments and decide whether an immediately following, visually presented letter string is a word or not. Three conditions were included: identical (the prime fragment is the first syllable of the target item), related (the prime fragment differs from the target in the nucleus of the first syllable) and unrelated (the prime fragment shares no segments with the first syllable of the target). The crucial condition was the related one. Fragments with [] as nucleus were expected to prime target words with [] at the respective position in both languages. Conversely, a prime fragment with [] was expected to prime a corresponding target word with [] in German but not in Turkish, because the feature [HIGH] extracted from the [] in the signal should mismatch the feature [LOW] in the Turkish representation of []. The data presented here are restricted to German. Reaction times and error rates did not confirm the predictions: only identical prime fragments eased lexical decisions to target words, whereas the related condition did not differ from the unrelated control condition.
An event-related brain potential component known as the P350 was used as a neurophysiological index of lexical activation of target words. Mean amplitudes did not differ between the identical and related conditions, whereas both differed from the unrelated control condition. No difference in the P350 effect was found for target words with [] as compared to []. These results are in line with the predictions.

Both the identical and the related condition led to priming. However, caution is warranted in interpreting the symmetric aspects of the results because the hypothesis tested here is


1 INTRODUCTION

1.1 Language: Common Sense or Astonishing Miracle?

I love chocolate and ice-cream and I used to eat a lot of them. One day I decided to stop eating sweets. One morning I was sitting in the kitchen craving chocolate when my friend came in, opened the fridge and asked: “Willst du auch ein Ei zum Frühstück?” ([vɪlst du aux ain ai tsʊm fryːʃtʏk]; ‘Do you also want an egg for breakfast?’). I got very mad at her because my deprived brain had heard: “Willst du auch ein Eis mit Früchten?” ([vɪlst du aux ain ais mɪt frʏçtən]; ‘Do you also want ice-cream with fruit?’). What had gone wrong? Her speech was as clear as always and my ears worked well. It was the thing between my ears that parsed eggs into ice-cream and breakfast into fruit. Fortunately, such breakdowns of our speech perception system are very rare, and we usually understand without any effort what others tell us. The fact that they occur at all, however, tells us that there is more to language than a simple one-to-one mapping.

Communication via language demands two forms of basic competency: the ability to produce language sounds and the ability to recognise these sounds as language and to decode them into meaningful units. Today we have a rather good understanding of the functioning of the motor and sensory organs involved in speech production and perception. The challenge that remains is to figure out what goes on right before, during and after speech is uttered and perceived. It is the functioning of the brain that challenges psychologists, and where language meets the brain, linguists meet psychologists and end up in the thrilling field of psycholinguistics.

During the past decades psycholinguists have come up with a vast number of theories concerning both speech production and perception. In the following I will confine myself to speech perception. In most cases we easily understand what others tell us. Words seem to reach us as clear-cut entities and listening does not exhaust us too much. But what seems so trivial turns into something that can hardly be understood as soon as one takes a closer look. The stream of acoustic energy that hits our ear cannot be neatly divided into clearly separable pieces, each of them representing one word. In other words, the physical speech signal is quasi-continuous.


Furthermore, the speech signal of the same phonetic segment varies within and across different words. Among other things, it is modulated by phonological context, syllabification and stress pattern. In addition, speech is coarticulated: while producing one sound, the vocal tract already prepares for the next segment. That is, a segment is not invariant. Its form is determined by the neighbouring segments, not only within a certain word but also across word boundaries. As a consequence, a word cannot be regarded as a constant entity but surfaces in many different forms.

Other factors that influence the speech signal are the position of the words in the prosodic structure of a sentence, the speaker’s sex and age, dialectal variation, speaking rate and style, background noise, etc. For more details see McQueen (2004).

1.2 Trying to Understand: Psycholinguistic Theories of Speech Perception

Any theory of speech recognition has to handle these variations in the signal. During the past decades many speech recognition models have been proposed. They all assume the existence of a mental lexicon, which stores “those aspects of the representation of lexical form that participate directly in the process of recognising spoken words, allowing the listener to identify the sequence of lexical items being produced by a given speaker” (Lahiri & Marslen-Wilson, 1991, p. 246). However, different models have very different views on the nature of the relevant aspects of representation. Many psycholinguistic theories assumed that the lexical entry of a word closely resembled its surface form (e.g. McClelland & Elman, 1986; Norris, 1994). According to them the lexical entry consists of phoneme-like strings representing the segments of the word. These assumptions were in line with the account given by Chomsky and Halle (1968), who proposed that the underlying segments are represented as unordered columns of features, defining a linear string of phonemes. Given that there is a lot of variation in the surface form of a word, intermediate levels of speech processing have to be assumed to handle these inconsistencies in the input. These pre-lexical levels are in charge of converting the continuous input signal into distinct phoneme-like units that can be fed to the mental lexicon. For instance, the TRACE model (McClelland & Elman, 1986) assumes a phonemic level intervening between a featural and a lexical level. Any feature that can be


Bybee (2000). She claims that words are the units of lexical storage. The mental lexicon can be regarded as a memory storage in which every experience of a word is stored together with previous examples of the same word.

In an attempt to find alternative ways of mental representation – probably more economic ones that are more resistant to variance in the signal – several psycholinguists have started to question the storage of lexical items in terms of segments. Segmental lexical entries require us to assume multiple lexical entries for a single word and/or intermediate pre-lexical stages of processing. This could be circumvented by representing words abstractly in the mental lexicon.

Abstract representation means that the underlying form of a word is not equivalent to its surface form. Not all features that can be extracted from the surface form have to be stored in the mental lexicon in order to unambiguously identify a given word. A lot of features are not distinctive and can be derived by redundancy rules. For instance, the feature [NASAL] is not distinctive in English vowels. Its occurrence can be derived by rule. There is no case in which a minimal pair has to be distinguished based on vowel-nasality. A vowel in English will simply be nasalized when followed by a nasal consonant. Therefore, storage of the feature [NASAL] for vowels in the mental lexicon does not contribute to the process of word selection (Lahiri & Marslen-Wilson, 1991).

The notion of abstract representation in the mental lexicon claims that items are not stored as strings of segments, but in terms of distinctive features. Any feature that is not distinctive but predictable or due to properties of the speaker (e.g. gender, age, etc.) is underspecified in the mental lexicon. In the following a model that assumes underspecified recognition is described in more detail. The experiment reported in this paper is based on the assumptions of this model.

1.3 The FUL Model and its Basic Assumptions

The Featurally Underspecified Lexicon (FUL) model (Lahiri & Reetz, 2002) claims that each morpheme has only one single lexical entry. No phonological variants are stored. This entry is not segmental but consists of hierarchically structured features. In order to cope with variance in the acoustic signal, not all possible features are stored in the mental lexicon.

Features that are subject to change are underspecified. That means for each segment the mental lexicon stores sufficient features to clearly identify and distinguish it from all other segments. However, features that can vary due to segmental or prosodic context or due to


dialectal or speaker characteristics etc. are not stored in the mental lexicon, i.e. they are underspecified. The exact feature representation of each segment depends on universal properties and language-specific requirements. Therefore, the same segment can have different lexical representations in different languages (Lahiri & Marslen-Wilson, 1992; Winkler, Lehtokoski, Alku, Vainio, Czigler, et al., 1999) and the representation of a segment can undergo change as the language itself changes over time (Ghini, 2001).

In the process of word recognition the acoustic features are extracted from the speech signal. All features are extracted regardless of whether they are stored in the mental lexicon or not. In the next step these acoustic features are transformed into phonological features. In turn these are mapped directly onto the mental lexicon. There is no intermediate, pre-lexical level of processing and there are no attempts to break down the speech stream into isolated segments. The mapping process works with a ternary matching logic. Features from the signal can either match, mismatch or not mismatch the features in the mental lexicon. If a feature that is extracted from the signal is also found in the mental lexicon, then there is a match. A mismatch occurs if a certain feature is extracted, but the lexicon contains another feature that is incompatible with the extracted one. The mismatching relationship can be bidirectional or unidirectional (see Table 1.1 for examples). Finally, a nomismatch condition is created either if no feature is extracted from the signal although there are features stored in the lexicon, or if a feature is extracted from the signal but not stored in the lexicon.

Table 1.1: Examples for the conditions match, mismatch and nomismatch.

Matching                    Signal       Lexicon
match                       [DORSAL]     [DORSAL]
mismatch, bidirectional     [HIGH]       [LOW]
mismatch, unidirectional    [CORONAL]    [DORSAL]
nomismatch                  [DORSAL]     [ ]¹
nomismatch                  [ ]          [LABIAL]²

¹ [CORONAL] is assumed to be underspecified in the mental lexicon; the place of articulation is therefore not represented here. The underspecification of [CORONAL] also explains the unidirectionality of the [CORONAL]-[DORSAL] mismatch.
² E.g. /e/ does not mismatch /ö/, although the latter is [LABIAL] while the former is not.
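The three-way classification in Table 1.1 can be sketched in code. The following is an illustrative toy, not the FUL model's published algorithm; the `match_feature` helper and the small conflict table are my own simplifications:

```python
# A minimal sketch of FUL's ternary matching logic (illustrative only).
# A lexical entry stores only its specified features; underspecified
# features, such as CORONAL place, are simply absent from the set.

# Feature pairs that actively conflict.  (CORONAL, DORSAL) appears in
# only one direction: since CORONAL is never stored in the lexicon, a
# DORSAL feature extracted from the signal never meets a lexical
# CORONAL, which captures the unidirectionality of that mismatch.
INCOMPATIBLE = {("HIGH", "LOW"), ("LOW", "HIGH"), ("CORONAL", "DORSAL")}

def match_feature(extracted, lexical_features):
    """Classify one extracted feature against a stored feature set."""
    if extracted is None:                    # nothing extracted from the signal
        return "nomismatch"
    if extracted in lexical_features:
        return "match"
    if any((extracted, lex) in INCOMPATIBLE for lex in lexical_features):
        return "mismatch"
    return "nomismatch"                      # absent or merely compatible

# The five rows of Table 1.1:
assert match_feature("DORSAL", {"DORSAL"}) == "match"
assert match_feature("HIGH", {"LOW"}) == "mismatch"        # bidirectional
assert match_feature("CORONAL", {"DORSAL"}) == "mismatch"  # unidirectional
assert match_feature("DORSAL", set()) == "nomismatch"      # lexicon underspecified
assert match_feature(None, {"LABIAL"}) == "nomismatch"     # nothing extracted
```

The key design point is that "nomismatch" is the default outcome: only an explicit conflict removes a candidate, which is what makes the system tolerant of surface variation.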


match between the surface form and the underlying form strengthens the word candidate. A nomismatch does not remove an item from the list of possible candidates, but does not increase its activation level either. As soon as a mismatch between signal and lexical entry occurs, the word in question is removed from the set of possible word candidates. At some point, the input is sufficiently distinctive to select the right lexical entry. Speaking in terms of activation, one item in the cohort will at some time clearly surpass its competitors in level of activation.

With its ternary matching logic, the system overgenerates possible word candidates, but it still removes impossible ones from the cohort. Since it is not the segments per se that are stored but their abstract phonological features, some information about these segments in the incoming speech signal may be missing or influenced by phonological context, and the listener is still able to correctly identify the input word. Every lexical item that does not explicitly mismatch the features extracted from the signal will stay in the cohort of possible target words.
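This winnowing of the cohort can be pictured with a small simulation. It is a toy of my own devising: the +1-per-match scoring and the mini-lexicon are illustrative assumptions, not part of the model:

```python
# Illustrative cohort winnowing: a mismatch removes a candidate, a match
# raises its activation, and a nomismatch leaves activation unchanged.

CONFLICTS = {("HIGH", "LOW"), ("LOW", "HIGH"), ("CORONAL", "DORSAL")}

def winnow(cohort, extracted):
    """cohort maps words to their stored (specified) feature sets;
    extracted is the feature set taken from the speech signal."""
    activation = {}
    for word, stored in cohort.items():
        score, alive = 0, True
        for feat in extracted:
            if any((feat, s) in CONFLICTS for s in stored):
                alive = False        # mismatch: word leaves the cohort
                break
            if feat in stored:
                score += 1           # match: word candidate strengthened
            # otherwise: nomismatch, activation unchanged
        if alive:
            activation[word] = score
    return activation

# Hypothetical mini-lexicon with simplified feature sets:
cohort = {"cand_A": {"DORSAL", "LOW"}, "cand_B": {"LABIAL"}, "cand_C": {"LOW"}}
# A [HIGH, DORSAL] signal ejects the two [LOW] candidates; cand_B
# survives on a nomismatch, without gaining activation.
print(winnow(cohort, {"DORSAL", "HIGH"}))   # {'cand_B': 0}
```

Note how the surviving candidate need not match anything: overgeneration is the price of robustness against variable input.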

What exactly is stored in the mental lexicon according to the FUL model? The lexicon contains phonological, morphological, semantic and syntactic information for each word. Only the information about features is used to find word candidates in the lexicon. All additional data help excluding unlikely candidates on a higher level of processing. These higher level processes do not wait until they are fed with a few remaining word candidates.

They operate in parallel with the basic feature mapping procedure right from the beginning of word perception (Lahiri, 1999; Lahiri & Reetz, 2002). This paper restricts itself to the phonological aspects of the mental lexicon.

In the lexicon a segment is represented with a root node and its hierarchically structured relevant features. “This hierarchical representation reflects the fact that phonological processes consistently affect certain subsets of features and not others. Individual features or subsets of features are functionally independent units and are capable of acting independently. (…) features are organised into functionally related groups dominated by abstract class nodes (such as place). The phonological features are the terminal nodes, and the entire feature structure is dominated by the root node (made up of the major class features like [CONSONANTAL] and [SONORANT]) which corresponds to the traditional notion of a single segment.” (Lahiri, 1999, p. 251) It is also crucial to note that place features are split into two independent nodes: an articulator node containing Place of Articulation and a Tongue Height node. Vowels and consonants share the same place features. Features are monovalent (Lahiri & Reetz, 2002).


Figure 1.1: Feature geometry following FUL (Nov. 2005); the names in italics are not features, only general descriptive terms of places of articulation. [Tree diagram: the ROOT node ([VOCALIC]/[CONSONANTAL], [OBSTRUENT]/[SONORANT]) dominates a LARYNGEAL node, the features [CONTINUANT], [NASAL], [VOICE], [SPREAD GLOTTIS], [STRIDENT], [ATR/RTR], [LATERAL] and [RHOTIC], and a PLACE node, which splits into an ARTICULATOR node ([LABIAL]; [CORONAL]: dental, alveolar, palatal, palatoalveolar, retroflex; [DORSAL]: velar, uvular; [RADICAL]; [GLOTTAL]; [PHARYNGEAL]) and a TONGUE HEIGHT/APERTURE node ([LOW], [HIGH]).]

1.4 Experimental Evidence for the FUL model

1.4.1 Evidence for Underspecification from a Gating Study

One of the first experimental studies on underspecification in the mental lexicon was conducted by Lahiri and Marslen-Wilson (1991). In a cross-linguistic study they showed that the listeners’ behaviour in a gating task is not determined by the surface form of words, but by their underspecified underlying form. The experiment was based on the different lexical status of the feature [NASAL] in Bengali and English. In both languages nasal and oral vowels occur. However, in Bengali nasality is a distinctive feature, whereas in English the nasal-oral


vowel. In no other case does one encounter nasal vowels in English. Since nasality can easily be derived by rule, there is no need to store the feature [NASAL] in the mental lexicon. Therefore, the FUL model assumes that English vowels are underspecified for nasality. Whenever an English speaker encounters a nasal vowel, he anticipates a nasal consonant following that vowel.

By contrast, in Bengali nasality has to be fully specified because it is a phonemic feature. There are seven oral vowels in Bengali and all of them have a corresponding nasal vowel. Therefore, minimal pairs like [kap] and [kãp] exist in the language. The /a/ in [kãp] has to be specified as nasal. In CVN words (N stands for a nasal consonant) like [kam] the vowel is assumed to be underlyingly oral, with vowel nasality being due to assimilatory processes. Consequently, a nasal vowel in Bengali creates ambiguity until the following consonant is perceived. If this consonant is [NASAL], it is clear that the nasality of the vowel was due to assimilation. In case the following consonant is oral, nasality of the vowel is caused by its lexical representation as [NASAL]. As long as ambiguity prevails, Bengali speakers are assumed to interpret surface nasality as underlying nasality and thus to access CṼC words (words with a nasal vowel between two non-nasal consonants) instead of CVN words.

These predictions were tested in a gating task. In such a task, subjects are confronted with the acoustic onset of a word and are stepwise provided with more and more information about the word being heard. At every increment they are asked to write down the word they think they are hearing. English subjects were tested on CVC and CVN words, Bengali subjects on CVC, CVN and CṼC words. The first gate was at the fourth glottal pulse after vowel onset. Further gates followed in steps of 40 ms.

The nasal vowels in CVN and CṼC stimuli strongly biased Bengali subjects towards CṼC responses. They interpreted the nasalization as a cue for a nasal vowel, not as a cue for nasality in the following consonant. Moreover, listeners produced more CVN responses to CVC stimuli than to CVN or CṼC stimuli. When Bengali subjects were presented with stimuli derived from CVN words for which no CṼC counterpart existed in the language, they either produced CVN responses or invented novel CṼC forms.

For the English subjects vowel nasalization in CVNs was an unambiguous cue for a following nasal consonant. In line with this, the proportion of CVN responses increased steadily up to vowel offset. Both Bengali and English subjects produced CVC and CVN words in response to CVC stimuli, in proportions reflecting the approximate distribution of these word types in the two languages. This supports the assumption that the vowels in CVCs and CVNs are represented as oral vowels. Were the mental lexicon similar to the surface forms of the words, CVC stimuli should have triggered only CVC responses in both languages, while a nasal vowel in Bengali should have triggered CVN as well as CṼC responses, because both vowels were assumed to be represented as [NASAL] (Lahiri & Marslen-Wilson, 1992).

Since gating may reflect post-access operations, it is considered an off-line paradigm (Grosjean, 1996). However, there is also evidence for a featurally underspecified lexicon from on-line techniques, as presented in the following paragraphs.

1.4.2 Evidence for Underspecification from Semantic Priming Experiments

Further evidence for underspecified representations in the mental lexicon comes from studies on assimilation, which can lead to surface variation. Assimilation of place of articulation has been investigated intensively. It is observed that segments with a coronal place of articulation can easily assimilate to a labial or dorsal place of articulation. For instance, the word green can be realized as gree[m] in a labial context like gree[m] bag, or it can surface as gree[ŋ] in a dorsal context like gree[ŋ] grass. However, one never encounters the reverse case, in which a labial or dorsal segment changes its place of articulation: crea[m] cheese does not become crea[n] cheese in the coronal context (Lahiri & Reetz, 2002). Despite the variation in the surface forms of green, the listener has no problem accessing the correct lexical entry. The FUL model accounts for this phenomenon by assuming an underspecified representation of the place feature [CORONAL] and fully specified representations of the place features [DORSAL] and [LABIAL]. In the case of gree[m] bag one would extract the feature [LABIAL] from the acoustic signal of [m]. The feature [CORONAL] of the underlying [n] of gree[n] is not represented in the lexicon, and hence a nomismatch will result. If, on the contrary, a speaker produced crea[m] cheese as crea[n] cheese, [CORONAL] would be extracted from the signal and mapped onto a labial place feature in the mental lexicon. This would lead to a mismatch and cream would be rejected as a possible target word.

Experimental evidence for the underspecification of [CORONAL] was provided by Lahiri and van Coillie (Lahiri & van Coillie, 1999). A crossmodal semantic priming


the subjects. Right thereafter, semantically related words like Zug (‘train’) or Krach (‘bang’) showed up on a screen. In the semantically unrelated condition, words like Maus (‘mouse’) or Blatt (‘leaf’) were used as primes for Zug and Krach. Subjects were asked to decide whether the visually presented strings were words of their language or not. Reaction times were expected to be faster in the semantically related condition than in the unrelated condition.

In order to test the postulate of coronal underspecification, the final place of articulation of the acoustic primes was altered. For instance, the coronal /n/ in Bahn was turned into a labial /m/, resulting in the pseudoword *Bahm. Similarly, the /m/ in Lärm was pronounced as /n/, yielding the pseudoword *Lärn. The FUL model predicts that *Bahm will activate the word Bahn, while *Lärn should not be able to prime Lärm. These predictions arise from the ternary matching logic. Since [CORONAL] is not stored in the mental lexicon, the /n/ in Bahn is underspecified for place. [LABIAL] will be extracted from the acoustic signal of *Bahm and mapped onto the lexicon. No corresponding entry for a place feature will be found, and hence the procedure will result in a nomismatch situation, leaving Bahn activated. In fact, no difference in matching is expected between mapping the surface forms Bahn and *Bahm onto the underlying form of Bahn. On the contrary, [CORONAL] is extracted from the /n/ in *Lärn and mapped onto the [LABIAL] feature of the /m/ in Lärm. This will result in a mismatch, removing Lärm from the cohort. In terms of reaction times, the FUL model thus expects fast reactions to the semantically related word Zug when preceded by *Bahm and slower reactions to Krach when preceded by *Lärn. The subjects’ reaction times confirmed these predictions: Bahn as well as *Bahm significantly primed Zug compared to the control Maus. In contrast, only Lärm significantly speeded up the reactions to Krach, whereas reaction times in the condition with *Lärn did not differ significantly from the control condition.
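The asymmetry between *Bahm and *Lärn reduces to a single place-feature comparison on the final nasal. The sketch below is a hypothetical illustration of that reasoning only; the `still_activated` helper and the way the feature labels are coded are mine, not the study's materials:

```python
# Toy illustration of the predicted priming asymmetry (not code from
# the study).  The lexical place of a final /n/ is CORONAL and hence
# not stored (None); a final /m/ is stored as LABIAL.
LEXICON = {"Bahn": None, "Lärm": "LABIAL"}
# Place feature extracted from the altered prime's final nasal:
PRIMES = {"*Bahm": ("Bahn", "LABIAL"), "*Lärn": ("Lärm", "CORONAL")}

def still_activated(stored_place, extracted_place):
    """A candidate survives unless signal and lexicon actively conflict."""
    if stored_place is None:        # underspecified: nothing to conflict with
        return True                 # nomismatch -> the word stays activated
    return extracted_place == stored_place

for prime, (target, extracted) in PRIMES.items():
    print(prime, "activates", target, "->",
          still_activated(LEXICON[target], extracted))
# *Bahm activates Bahn -> True    (nomismatch: semantic priming of Zug)
# *Lärn activates Lärm -> False   (mismatch: no priming of Krach)
```

The asymmetry thus falls out of storing nothing for coronal place rather than from any property of the acoustic manipulation itself.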

Similar results were obtained for word-medial variation using the same experimental design. The word Düne (‘dune’) primed the semantically related word Sand (‘sand’), and so did its corresponding pseudoword *Dü[m]e. However, although the word Schramme (‘a scratch’) primed its semantic relative Kratzer (‘a scratch’), the pseudoword *Schra[n]e did not. The reason for this asymmetrical effect is again thought to be the underspecification of the place feature [CORONAL] in the mental lexicon.


1.4.3 Evidence for Underspecification from Mismatch Negativity

Not only consonants but also vowels are assumed to be underspecified for coronal place of articulation. This assumption was tested by Eulitz and Lahiri (2004) by means of event-related brain potentials reflected in the electroencephalogram (EEG), more specifically, in terms of the mismatch negativity.

The EEG derives from summated postsynaptic electrical potentials generated by cells in the cortex. These potentials are registered with electrodes attached to the scalp (Davidson, Jackson & Larson, 2000). From these signals, event-related brain potentials (ERPs) can be calculated. ERPs are regarded as a manifestation of brain activity that is time-locked to internal or external events. The brain’s response to a single event will not be visible in the signal because the surrounding brain activity causes too much noise. ERPs are obtained by averaging many brain responses to the same stimuli (or at least to the same category of stimuli). This way, the non-systematic noise is cancelled out while the systematic brain responses that are triggered by the event emerge. An ERP consists of several positive and negative peaks in the signal that differ from each other in scalp distribution, polarity and latency. Consequently, an ERP can be regarded as an aggregate of a number of ERP components. A component is commonly taken to reflect the tendency of segments of the ERP waveforms to covary in response to specific experimental manipulations (Fabiani, Gratton & Coles, 2000).
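The noise-cancellation argument behind ERP averaging can be illustrated with a toy simulation (hypothetical numbers, not an analysis of the present study's data): a small time-locked response buried in much larger random background activity is recovered by averaging, because the residual noise shrinks roughly with the square root of the number of trials.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 500

# The same small, time-locked "evoked response" occurs on every trial ...
evoked = np.sin(np.linspace(0.0, 2.0 * np.pi, n_samples))
# ... buried in non-systematic background activity ten times larger.
trials = evoked + rng.normal(scale=10.0, size=(n_trials, n_samples))

# Averaging across trials cancels the random noise but keeps the
# systematic, time-locked response: this average is the ERP.
erp = trials.mean(axis=0)

residual = np.abs(erp - evoked).mean()
print(f"residual noise after averaging {n_trials} trials: {residual:.2f}")
# The residual is far below the single-trial noise scale of 10 and
# shrinks further with more trials (~ 1/sqrt(n_trials)).
```

This is also why ERP experiments need many repetitions per condition: halving the residual noise requires quadrupling the number of trials.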

“The mismatch negativity (MMN) is a frontocentrally negative component of the auditory event-related potential (ERP), usually peaking at 100-250 ms from stimulus onset, that is elicited by any discriminable change in some repetitive aspect of the ongoing auditory stimulation irrespective of the direction of the subject’s attention or task” (Näätänen, 2001, p. 1). That is, suppose a subject repeatedly heard the same phoneme and all of a sudden this presentation was interrupted by another phoneme for a single trial; this so-called deviant phoneme elicits a negativity in the EEG signal, the MMN, indicating that a significant change was detected. “Importantly, the repetitive ‘standard’ stimulus element does not have to be acoustically constant for MMN to be elicited, as long as some pattern or rule is shared by the standards. MMN is then elicited by stimuli violating this pattern or rule” (Näätänen, 2001,


the phonological representation in the mental lexicon, which is, in linguistic terms, the underlying representation. The percept created by the infrequent deviant stimulus is more low level and has vowel specific information available around 100 ms after stimulus onset. This percept corresponds in part to the set of phonological features extracted from the acoustic signal, the so-called surface form. In this way the MMN can be used to study the difference between the surface form, extracted from the deviant, and the underlying representation, created by the standard” (Eulitz & Lahiri, 2004, p.577-578, and references therein).

In the experiment reported by Eulitz and Lahiri (2004) the vowels [e], [o] and [ø] were used. [e] and [ø] are coronal vowels whereas [o] is dorsal. The crucial stimulus pair was [o] and [ø]. When [o] was taken as standard, the activated central sound representation was dorsal. From the infrequent deviant stimulus [ø] the feature [CORONAL] was extracted. This mismatched with the feature [DORSAL], evoking an enhanced mismatch negativity, expressed in an earlier peak latency and a higher amplitude in the ERP compared to situations without a conflict between standard and deviant. Such a nonconflict situation was created when standard and deviant were reversed. With [ø] as standard, the activated underlying representation contained no feature for Place of Articulation because [CORONAL] is underspecified. Thus the feature [DORSAL] extracted from the deviant [o] did not mismatch the central sound representation and consequently did not lead to an enhanced MMN (see Figure 1.2). Since the acoustic difference between standard and deviant was exactly the same in both conditions, the asymmetric EEG responses point to mismatch effects at a higher level of representation. By assuming that the vowels had activated their lexical representations, the FUL model can explain these results nicely.

Figure 1.2: ERP waveforms depicting the MMN, taken from Eulitz and Lahiri, 2004. The legend refers to the deviant stimulus.

For the vowel pair [e] and [ø] the direction of presentation did not make a difference. Both phonemes are coronal, thus leading to a nomismatch situation in any case.


Using the same design, Eulitz and Lahiri (2003) gained evidence for the assumption that the Tongue Height feature of the vowel [ɛ] is [LOW] in Bengali, whereas in German [ɛ] is assumed to remain unspecified for Tongue Height. When presented with [ɛ] as the standard stimulus, the Bengali subjects set up a [LOW] central sound representation while the German subjects did not. Hence, a [HIGH] deviant stimulus, in this case an [u], elicited a mismatch response in the Bengali speakers but not in the German ones. This again contradicts the assumption that speech recognition proceeds in a simple feed-forward manner from the phonetic characteristics of the input.

1.4.4 Evidence for Underspecification from N400 Studies

Another EEG study, using a different design and investigating another event-related brain potential, was conducted by Friedrich, Lahiri and Eulitz (submitted). This time words were used rather than isolated segments. Subjects were asked to listen to 240 spoken utterances, half of which were words, the other half pseudowords, and to decide whether they had just heard a word or a pseudoword. Each pseudoword differed from a word only with respect to the place feature of the initial segment. Half of the words had a coronal stop or nasal as their onset (e.g. Drachen, ‘dragon’), the other half began with a labial or a dorsal stop or nasal (e.g. Grenze, ‘frontier’). Pseudowords were created by reversing the initial place features so that a coronal word onset turned into a non-coronal one and vice versa. For instance, the pseudoword corresponding to Drachen was *Brachen, and Grenze became *Drenze.

While the subjects performed the lexical decision task, reaction times were measured and their electrical brain activity was recorded. An extensively investigated ERP component is the N400, a language-related negative-going potential that peaks around 400 ms after word onset. However, this latency can vary considerably: 400 ms holds for the presentation of written words, whereas longer latencies have to be expected for acoustic materials, since their information unfolds incrementally. The N400 component was first described by Kutas and Hillyard (1980) as reflecting aspects of semantic processing. In the 25 years since then, the N400 has been related to many different processes, not only semantic in nature. For instance,


versus a non-coronal onset. From pseudowords with a coronal onset like *Drenze the feature [CORONAL] is extracted from the signal. The corresponding word Grenze will not be activated as a possible target word, because its stored feature [DORSAL] mismatches with [CORONAL]. The input can readily be identified as a pseudoword and as such elicits the N400 pseudoword effect.

However, if a non-coronal pseudoword like *Brachen is perceived, [LABIAL] is extracted from the signal and matched onto underlying representations. In the lexical entry of Drachen coronality is not specified and therefore Drachen will be activated to some extent.

Consequently, non-coronal pseudowords should not elicit the N400 pseudoword effect, or at least a significantly smaller one.

Behavioural results showed that subjects performed the lexical decisions faster for words than for pseudowords. Crucially, within the pseudowords, coronal pseudowords were responded to significantly faster than non-coronal pseudowords. The longer reaction times to non-coronal pseudowords are interpreted as reflecting the lexical ambiguity created by a non-coronal segment. The N400 effect was observed surprisingly late, in a 600-900 ms time window. Pseudowords elicited a larger N400 than words. As predicted, the coronal pseudowords elicited a larger N400 than non-coronal ones, whereas the N400 amplitudes of coronal and non-coronal words did not differ significantly.

1.5 The FUL Model under Fire: Its Opponents

Above, experimental evidence for the FUL model was discussed. Its assumptions were confirmed in very different experimental settings, with behavioural as well as electrophysiological measurements, in consonants and vowels, word-initially, -medially and -finally, across and within languages.

Of course the FUL model also has its opponents. By extending the binary logic of match versus mismatch into a ternary logic of match, mismatch and nomismatch, the FUL model gains flexibility in dealing with surface variation. At the same time it enlarges the cohort of possible word candidates. Proponents of full specification of items in the mental lexicon (e.g. Johnson, 2004) criticise the FUL model for not penalising a nomismatch. They claim that this way the model is no longer able to distinguish between viable candidate words in the cohort. Empirical evidence against the FUL model arose from some studies on assimilation. In immediate repetition priming experiments, Gow (2001, 2002) seemingly contradicted the predictions of the FUL model. Labial and dorsal word-final segments that were mispronounced as coronals still activated the original words, although the FUL model would have predicted a mismatch between a coronal in the signal and a non-coronal in the mental lexicon (Gow, 2001). Similarly, a coronal that had become a labial due to assimilation and as such was acoustically similar to another existing item (e.g. right-ripe, run-rum) only activated its original, non-assimilated form instead of leading to lexical ambiguity as the FUL model had predicted (Gow, 2002; Gaskell & Marslen-Wilson, 2001).

Experiments that yielded evidence for and against the FUL model differ in the experimental design used, in the nature of the prime words (fragments, pseudowords or real words), in the nature of the priming per se (repetition, semantic or fragment priming), and in whether or not they lead to lexical ambiguity, to mention only the differences that are obvious at first sight. These criticisms of the FUL model do not immediately affect the present experiment and will therefore not be discussed in more detail in this section; see part 4.6 for a further discussion.

1.6 The Present Study: Investigating the Representation of a Vowel in the Mental Lexicon

1.6.1 Underlying Assumptions on the Representation of Features

The FUL model claims that the lexical entry of a word does not correspond in a straightforward way to its surface representation. Only relevant information that cannot be derived otherwise is stored. This opens the possibility that a segment which is acoustically identical in two languages is represented differently in the mental lexicon, depending on the structure of the language. Ghini (2001) has already shown that one single surface form can have two different underlying representations in the mental lexicon within a single language. Like many Italian dialects, Miogliola has lost the quantity distinction that was present in Latin. In sonorants, this led to a neutralisation of the length contrast. (It did so also in obstruents, but these underwent further processes that preserved contrast.) The labial nasal [m] simply lost the contrast. The coronal nasal [n], however, transformed the quantity distinction into a place distinction: the original Latin single coronal nasal /n/ remained unspecified for place, while the geminate /n:/ degeminated and became specified for [CORONAL]. Hence, only the formerly single /n/ undergoes change in place depending on its context.


different assignment of Tongue Height features in Bengali versus German vowels (see section 1.4.3).

The aim of the present study was to examine hypothesised differences between German and Turkish in the representation of the place features of the vowel [ɛ]. In Turkish, [ɛ] is assumed to have the Articulator feature [CORONAL] and the Tongue Height feature [LOW]. Since [CORONAL] is unspecified in the mental lexicon, [ɛ] is assumed to be specified only as [LOW]. In standard German, on the contrary, [ɛ] is not treated as a low vowel, leaving it unspecified for any place feature.

1.6.1.1 Reasons for the Underspecification of [ɛ] in German

As far as Tongue Height in vowels is concerned, German is assumed to have high, low and mid vowels: /u/ and /i/ are assumed to be [HIGH], /a/ is [LOW], and /e/ and /o/ are regarded as mid. Since there is a three-way distinction in the German vowel system, both features [HIGH] and [LOW] are assumed to be specified in the mental lexicon. There is no need to specify a third value, because anything that is neither [HIGH] nor [LOW] is automatically mid.

‘Mid’ is not considered a Tongue Height feature, and there is no such thing as ‘mid’ extracted from the speech signal. Mid vowels are simply vowels somewhere along the low-high dimension. Furthermore, [ɛ] is [CORONAL]. However, [CORONAL] is not specified in the mental lexicon either. This leaves the German [ɛ] entirely unspecified for Place.

1.6.1.2 Reasons for the Specification of [ɛ] as [LOW] in Turkish

The assumption that [] is specified as [LOW] in Turkish relies, among others, on the patterning of Turkish vowels in vowel harmony. Vowel harmony is an active and productive process as far as affix allomorphy is concerned. The vowels in affixes have to meet certain criteria and adapt to certain phonological features of the root vowels. In Turkish suffixes that undergo vowel harmony contain either a low vowel or a high vowel (e.g., Lahiri, 2000). For instance, the suffix indicating the plural of a noun contains an underlyingly low vowel and yields either –ler or –lar on the surface. The choice of the allomorph is conditioned by the front/back property of the last stem vowel. That is, in the case of ip (‘rope’), /i/ is front and

(22)

therefore the plural suffix will be –ler, forming ipler (‘ropes’). Son (‘end’) has a back vowel and thus will form the plural sonlar (‘ends’) (examples are taken from Halle & Clements, 1983, also cited in Lahiri, 2000). Similarly, the dative suffix is formed with only low vowels, yielding –e or –a.

Suffixes that contain an underlyingly high vowel undergo both palatal and labial assimilation, thereby adjusting to the front/back as well as to the round/unround dimension of the root vowel. For instance, the genitive suffix contains a high vowel, surfacing as one of -in, -un, -ün, or -ın, depending on the frontness/backness and the roundness of the stem vowel. Since only high vowels undergo labial harmony, the low round vowels [o] and [ø] never show up in suffixes (Kabak, personal communication). This explains the need for a phonemic contrast between suffixes containing an underlyingly [HIGH] vowel and those containing an underlyingly [LOW] one.

The fact that [a] and [] consistently pattern together in productive vowel harmony suggests that [LOW] vs. [HIGH] must somehow be defined as part of the representation of Turkish affixes in the mental lexicon. However, there is no consensus on whether both Tongue Height features, namely [LOW] and [HIGH], have to be stored in the Turkish mental lexicon, or if one of them is sufficient. Kabak (2004; personal communication), for instance, argues that all vowels in Turkish are phonologically either [HIGH] or [LOW]. No three-way distinction as in German is necessary. Hence it would be redundant to store both features. Any vowel that is not [LOW] will automatically be classified as [HIGH] for the matters of phonology. Furthermore, Kabak (2004) and Kabak and Vogel (to appear) argue that the underspecification of [HIGH] provides an explanation for vowel/consonant-epenthesis, as well as vowel assimilation phenomena in Turkish in a simple and straightforward fashion.

1.6.2 Cross Modal Fragment Priming and its Behavioural and Electrophysiological Correlates

If these assumptions are true and [ɛ] is underlyingly specified as [LOW] in Turkish while remaining unspecified in German, then a Turkish mental lexicon should differ in activation pattern from a German mental lexicon. Such a case can be created in cross modal fragment priming.

In cross modal fragment priming, a word fragment is presented acoustically to a subject. Immediately at its offset, a word is presented visually on a screen. This word can contain the fragment that has just been heard, it can differ slightly from the preceding fragment, or it can be completely unrelated to it. Equal proportions of words and pseudowords are used as visual targets. Participants perform a lexical decision on the targets, indicating by button press whether they have just read a word or a pseudoword.

It is generally assumed that the lexical entry for a word is a modality-independent entity. Auditory and visual information is first processed via two modality-dependent access routes that work independently of each other but both feed into modality-independent lexical entries (Marslen-Wilson, 1990). Cross modal priming thus provides the possibility to investigate lexical access while ruling out the danger that the data are simply due to superficial form priming effects. Expectations on the effects of cross modal fragment priming are based on the assumption that incoming speech input activates a cohort of possible word candidates.

As more and more information about the input becomes available, this cohort is narrowed until one item remains. In most cases an acoustically presented fragment, namely the onset of a word, does not provide the mental lexicon with sufficient information to restrict the cohort to a single item; more than one possible target will stay activated. It is hypothesised that the activation level of a word in the mental lexicon is mirrored in a person’s reaction time. That is, if the word that appears on the screen after the fragment has been activated by that fragment and still remains in the active cohort, then the subject will respond to it faster. For instance, Marslen-Wilson (1990) has shown that the acoustic presentation of the fragment fee-, taken from the English word feel, speeded responses to the visual targets feel as well as feed. Subjects were faster to decide that feel and feed are words, compared to some completely unrelated word like name.
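The cohort narrowing described above can be illustrated with a toy model. The mini-lexicon is made up for illustration; a real model would operate over a full lexicon and over phonological features rather than orthographic prefixes:

```python
# Toy cohort model: the input heard so far selects all still-compatible candidates.
LEXICON = ["feel", "feed", "fee", "feeble", "name"]

def cohort(fragment):
    """All lexicon items whose onset is compatible with the input heard so far."""
    return [w for w in LEXICON if w.startswith(fragment)]

print(cohort("f"))     # broad cohort: ['feel', 'feed', 'fee', 'feeble']
print(cohort("fee"))   # still several candidates: ['feel', 'feed', 'fee', 'feeble']
print(cohort("feel"))  # a single candidate remains: ['feel']
```

After fee- the cohort still contains feel and feed, which is exactly why the fragment primes both targets in the Marslen-Wilson (1990) result.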

Cross modal fragment priming has also been shown to work in semantic priming. In Dutch, the acoustic fragment kapi- led to faster lexical decisions in response to the visual targets geld (‘money’) and schip (‘ship’) as compared to semantically unrelated targets. Geld is semantically related to kapitaal (‘capital’), schip is related to kapitein (‘captain’), both beginning with kapi- (Zwitserlood, 1989).

In the examples reported so far, the acoustic input was entirely identical to the probes that were primed by it. Before, it was postulated that the mental lexicon is underspecified for certain features. Therefore, cross modal fragment priming effects are also expected in cases where the probe deviates from the acoustic prime, as long as this deviation does not lead to a mismatch condition. These assumptions have been confirmed in several studies. For instance, Friedrich, Lahiri and Eulitz (submitted) conducted the experiment described in section 1.4.4 once more with a cross-modal fragment priming design. The identical coronal fragment (e.g. drach-) as well as the non-coronal pseudoword onset (*brach-) were assumed to prime the target (in this case Drachen) as compared to unrelated primes. On the contrary, non-coronal targets should only have been primed by their identical, non-coronal primes (e.g. gren-Grenze), but not by their coronal pseudoword primes (e.g. dren-Grenze). Not only were reaction times measured but also the EEG signal was recorded. The reaction time results did not confirm the FUL model: in both conditions only the identical fragments primed.

However, the design of the study was not optimised for behavioural measures but served the needs of an EEG experiment. The event-related potentials, in turn, confirmed the assumptions of the FUL model.

In cross modal fragment priming studies, a component called P350 has been consistently observed. This P350 ranges between 200 and 400 ms and differentiates matching from mismatching or unrelated prime-target pairs (Friedrich, Kotz, Friederici & Alter, 2004; Friedrich, Kotz, Friederici & Gunter, 2004). Peaking at about 350 ms after the onset of the visual stimulus, the ERP amplitude is consistently more negative for matching prime-target pairs as compared to mismatching or unrelated conditions. The P350 effect has been defined as a difference wave, resulting from the subtraction of the matching or mismatching conditions from the unrelated condition. Subtraction of matching words from unrelated control words reveals more positive amplitudes than subtraction of mismatching words from unrelated controls. This effect is observed predominantly over the left hemisphere.

For the above-mentioned experiment, the predicted asymmetric results were indeed obtained in the EEG data. The P350 difference wave for target words with coronal onsets was equally positive in both conditions, the one with the identical prime and the one with the prime altered in Place of Articulation. However, the P350 effect was reduced when non-coronal words were preceded by fragments with coronal Place of Articulation as compared to fragments with matching Place of Articulation.


word in the context of a preceding sentence or priming situation (Friedrich, 2005). The more expected this word, the less effort is needed and the smaller the N400 amplitude. However, in a fragment priming design the N400 has so far not shown any asymmetric effects. N400 amplitudes are reliably enhanced in control conditions, where the priming stimulus is completely unrelated to the target, as compared to conditions with identical priming. Reactions to prime stimuli that deviate slightly from the identical prime cannot be differentiated from identical priming on grounds of the N400 amplitude, no matter whether they cause a nomismatch or a mismatch condition according to the FUL model (Friedrich, 2005).

To sum up, in a cross modal fragment priming experiment the following responses are expected: in a matching or at least not mismatching situation, subjects should respond significantly faster to the target words, and their P350 difference wave amplitudes should be more positive than in a mismatching or unrelated condition. Furthermore, the N400 amplitude should be more negative in the unrelated control condition than in the experimental conditions.

1.6.3 Experimental Design and Hypotheses for the Present EEG Study

1.6.3.1 In Search of a Phenomenon that Allows Insight

As reported before, the aim of the present study was to test the mental representations of the vowel [ɛ] in Turkish and German. The assumption was that [ɛ] is specified as [LOW] in Turkish, while it remains unspecified in German. In order to base this assumption on experimental evidence, a cross modal fragment priming study was designed. Naturally, an [ɛ] would prime itself in any language that has this sound. The crucial question in this experiment was whether an [ɪ] would be able to activate the lexical representation of an [ɛ] in the mental lexicon. An [ɪ] is definitely produced and perceived as a high, coronal vowel in both languages. In German it is also assumed to be stored as [HIGH], while for Turkish there is controversy whether it is stored as [HIGH] or as unspecified for Tongue Height, as illustrated in section 1.6.1.2. Table 1.2 lists the place features that are extracted from the speech signal and those that are assumed to be stored in the mental lexicon for the vowels [ɪ] and [ɛ].


Table 1.2: Surface forms and underlying representations of the vowels [ɪ] and [ɛ] in Turkish and German. “Signal” refers to the place features extracted from the acoustic signal, “Representation” refers to the place features stored in the mental lexicon.

Turkish:
        Signal         Representation
[ɪ]     [COR][HIGH]    [HIGH] or []
[ɛ]     [COR]          [LOW]

German:
        Signal         Representation
[ɪ]     [COR][HIGH]    [HIGH]
[ɛ]     [COR]          []

It is obvious from Table 1.2 that the place feature [CORONAL] appears only in the signal but is not represented in the mental lexicon. In the case of [ɛ], no Tongue Height feature is extracted from the signal, since it is produced not as a low vowel but as a mid one. In fact, [ɛ] does not differ in production between the two languages, only in its representation. Table 1.3 shows the activation pattern that is expected when these vowels are matched onto their own mental representations. Since there are features in the signal that are not specified in the lexicon, and also features in the mental lexicon that are not extracted from the signal, the result is a nomismatch in all cases. In principle, a matching pattern could be given for every single feature of a segment; in order to keep matters less complex, only one overall matching pattern is given in the table. That is, as soon as one nomismatch occurs, the overall matching condition is assigned a nomismatch label. The same holds true for a mismatch.


Table 1.3: Matching pattern that is expected as the vowels [ɛ] and [ɪ] are mapped onto their own mental representation. “Signal” refers to the place features extracted from the acoustic signal, “Representation” refers to the place features stored in the mental lexicon.

Prime: [ɛ] - Target: [ɛ]
          Signal of [ɛ]   Representation of [ɛ]   Matching
Turkish   [COR]           [LOW]                   nomismatch
German    [COR]           []                      nomismatch

Prime: [ɪ] - Target: [ɪ]
          Signal of [ɪ]   Representation of [ɪ]   Matching
Turkish   [COR][HIGH]     [HIGH] or []            nomismatch
German    [COR][HIGH]     [HIGH]                  nomismatch

Crucial for the following experiment was the difference in expected matching patterns between Turkish and German when an [ɛ] in the signal is mapped onto the mental representation of an [ɪ] and vice versa. This scenario is illustrated in Table 1.4.

Table 1.4: Matching pattern that is expected as the vowels [ɛ] and [ɪ] are mapped onto each other’s mental representation. “Signal” refers to the place features extracted from the acoustic signal, “Representation” refers to the place features stored in the mental lexicon.

Prime: [ɛ] - Target: [ɪ]
          Signal of [ɛ]   Representation of [ɪ]   Matching
Turkish   [COR]           [HIGH] or []            nomismatch
German    [COR]           [HIGH]                  nomismatch

Prime: [ɪ] - Target: [ɛ]
          Signal of [ɪ]   Representation of [ɛ]   Matching
Turkish   [COR][HIGH]     [LOW]                   mismatch
German    [COR][HIGH]     []                      nomismatch

An [] in the signal that is mapped onto an underlying [] also results in a nomismatch situation in both languages because no Tongue Height Feature is extracted from the signal.

Note that in this case it doesn’t make a difference whether Turkish [] is represented as

(28)

[HIGH] or underspecified. However, as an [] is presented in the signal, the extracted Tongue Height feature [HIGH] will mismatch with the underlying representation of [] as [LOW] in Turkish. For German once more a nomismatch is expected due to the underspecified Tongue Height feature.
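The ternary matching logic underlying Tables 1.2-1.4 can be sketched in a few lines. The function name and the set-based feature encoding are my own illustrative choices, not part of the FUL model's formal definition, and the conflict pairs are restricted to the features used in the tables:

```python
# Conflicting feature pairs assumed from the discussion above (signal, stored).
CONFLICTS = {("HIGH", "LOW"), ("LOW", "HIGH"), ("COR", "DOR"), ("DOR", "COR")}

def ful_match(signal, representation):
    """Ternary FUL-style evaluation of extracted vs. stored features:
    mismatch if any extracted feature conflicts with a stored one;
    match only if extracted and stored features coincide exactly;
    nomismatch otherwise (one nomismatching feature labels the whole pair)."""
    if any((s, r) in CONFLICTS for s in signal for r in representation):
        return "mismatch"
    if signal == representation:
        return "match"
    return "nomismatch"

# Turkish: [HIGH] extracted from an [ɪ]-prime conflicts with the stored [LOW] of [ɛ]:
print(ful_match({"COR", "HIGH"}, {"LOW"}))  # mismatch
# German: [ɛ] stores no place feature at all, so nothing can conflict:
print(ful_match({"COR", "HIGH"}, set()))    # nomismatch
```

The asymmetry predicted for the experiment falls out directly: the same acoustic deviation yields a mismatch against the Turkish representation but only a nomismatch against the German one.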

1.6.3.2 Converting Abstract Assumptions into Concrete Predictions

In order to test these assumptions, a cross modal fragment priming design was set up. For this purpose, Turkish and German words with an [ɛ] or an [ɪ] as the nucleus of their first syllable were collected, thirty of each group. Since tense [e] and tense [i] do not exist in Turkish, the German stimuli were also restricted to the lax counterparts [ɛ] and [ɪ]. This was done to ensure that the German and Turkish items did not differ in vowel quality. The design included three conditions. In the identity condition, the first syllable of the target word was used as the auditory prime. The related condition differed from the identity condition in that the vowel of the prime syllable was altered: had it contained an [ɛ], this was replaced by an [ɪ] and vice versa. The syllable created in this way was not allowed to form the first syllable of any existing word in the language in question; in order to interpret the results unequivocally, this prime item should not activate a cohort of its own. The aim of this related condition was to see whether it would still prime the target word. In the unrelated condition, the target word was preceded by an entirely different priming stimulus that shared no segments with its target.

Table 1.5: German and Turkish items as an illustration of the conditions realised in the experiment.

          Identity                  Related        Unrelated
German    bech-Becher (‘mug’)       bich-Becher    zan-Becher
          skiz-Skizze (‘sketch’)    skez-Skizze    bag-Skizze
Turkish   teh-tehdit (‘threat’)     tih-tehdit     fab-tehdit
          bib-biblo (‘trinket’)     beb-biblo      nok-biblo


Hypotheses for German

Predictions for German state that in the identity as well as in the related conditions the auditory prime activates a cohort that includes the target word.

Reaction Time: German subjects are expected to be faster in telling that the target is an existing German word if it is preceded by an identical or related prime fragment and slower if the target follows an unrelated fragment.

P350: Both the identity and the related condition are expected to differ significantly from the unrelated condition. More precisely, approximately 350 ms after stimulus onset, the amplitudes of the identity and related conditions should be significantly more negative than the amplitude of the unrelated condition. No difference in amplitude is predicted between the identity and the related condition. Such a result would indicate that in both conditions the target word had been automatically activated in the mental lexicon by the preceding prime fragment.

N400: Also the N400 amplitude should be reduced for the identity as well as for the related condition as compared to the fully unrelated condition due to higher expectedness of the target in the former two conditions.

Symmetry: For the German subjects these predictions hold irrespective of the order of presentation. This means that an [ɪ] in the related prime should activate its corresponding [ɛ]-word, just as an [ɛ]-prime is expected to activate an [ɪ]-word.

Hypotheses for Turkish

For the Turkish subjects, similar outcomes are expected for the identity conditions and for the related condition in which an [ɛ]-fragment primes its corresponding [ɪ]-word. In the related condition with the reverse order, however, the [ɪ]-fragment is predicted to mismatch with the representation of the corresponding [ɛ]-word (see Table 1.4).

Reaction Time: Long reaction times, comparable to those of the unrelated condition, are expected for the related condition where an [ɪ]-fragment precedes an [ɛ]-word. In the related condition with the reversed prime-fragment order and in the identity condition, reactions will be faster.

P350: ERP amplitudes should reveal less negative values for the mismatching related condition than for the identity condition and the related condition resulting in a nomismatch. This is because an [ɪ]-fragment is not assumed to trigger lexical activation of [ɛ]-words. The unrelated condition is assumed to have the least negative amplitude.

N400: The N400 will not differ between the two related conditions and the identity condition. It is simply expected to be enhanced for the unrelated control condition.

Asymmetry: Due to different matching patterns asymmetric priming results are expected in the two related conditions for Turkish subjects.

1.6.4 Restriction to German Due to Missing Turkish Subjects

While it was easy to find German subjects, it proved impossible to recruit Turkish subjects who fitted the selection criteria. They were required to have Turkish as their only mother tongue while speaking as little German as possible; bilinguals were not allowed to take part in the study. A-levels were also required in order to keep the educational level comparable to that of the German students. Unfortunately, no monolingual Turkish subjects with a high educational level and at the same time no knowledge of German could be recruited in Konstanz. Therefore the remainder of this paper restricts itself to the German population.


2 METHODS

2.1 Participants

Seventeen students, eight male and nine female, from the University of Konstanz participated in the EEG experiment. One female participant had to be excluded because she had misunderstood the instructions. The remaining sixteen participants were between 20 and 38 years old (mean age = 25.5 years, std = 4.53). All were right-handed; handedness was assessed with the 10-item version of the Edinburgh Handedness Inventory (Oldfield, 1971). Lateralisation quotients (LQ) ranged between 60 and 100 (mean LQ = 88.1). All participants were monolingual native German speakers. None of them suffered from a neurological disorder or had uncorrected auditory or visual impairments.
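The lateralisation quotient of the Edinburgh Handedness Inventory is standardly computed as LQ = (R − L) / (R + L) × 100, where R and L are the summed right- and left-hand preference marks over the 10 items. A minimal sketch (the two-marks-per-item scoring scheme is an assumption about how the inventory was scored here):

```python
def lateralisation_quotient(right_marks, left_marks):
    """Edinburgh Handedness Inventory LQ: +100 fully right-, -100 fully left-handed."""
    r, l = sum(right_marks), sum(left_marks)
    return 100.0 * (r - l) / (r + l)

# Two right-hand marks on every item -> fully right-handed:
print(lateralisation_quotient([2] * 10, [0] * 10))  # 100.0
# Equal use of both hands on every item -> no lateralisation:
print(lateralisation_quotient([1] * 10, [1] * 10))  # 0.0
```

On this scale, the sample's LQ range of 60 to 100 reflects consistent right-hand preference across items.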

The subjects were recruited by advertising on the university’s noticeboards.

Furthermore, members of a data-pool of students interested in participating in experiments were also asked to participate. For their participation they were paid 15 Euros.

Participants were informed orally and in writing about the EEG-procedure and gave informed consent.

2.2 Stimuli

Since the experimental design was cross modal fragment priming, the target words were presented visually on a monitor, preceded by auditory word fragments. The carrier words for the prime fragments were recorded in a sound-attenuated chamber. All stimuli were read by a male native speaker from southern Germany. Speech signals were recorded with a Sennheiser MD421U microphone, digitised with a DAT recorder (Tascam DA-P1) and saved directly to a computer with a sampling rate of 44.1 kHz and 16 bit resolution. Off-line editing was performed with Cool Edit 2000 (© Syntrillium Software Corporation, Phoenix, AZ). The prime fragments consisted of the first syllables of these words; they were cut with Cool Edit at the syllable boundary before the transition to the next segment. Peak amplitude was normalised to 70% of the maximum value of the sample.
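The 70% peak normalisation applied to the prime fragments amounts to a single rescaling of the waveform. The editing itself was done in Cool Edit; this sketch merely illustrates the operation on a float-valued waveform with full scale 1.0 (an assumption about the representation):

```python
import numpy as np

def normalise_peak(samples, fraction=0.7, full_scale=1.0):
    """Scale the waveform so its absolute peak sits at fraction * full_scale."""
    peak = np.abs(samples).max()
    return samples * (fraction * full_scale / peak)

# A tiny illustrative waveform whose loudest sample is at -0.5:
x = np.array([0.1, -0.5, 0.25])
y = normalise_peak(x)
print(np.abs(y).max())  # 0.7
```

Normalising every fragment to the same relative peak keeps the perceived loudness of the auditory primes comparable across items while leaving 30% headroom against clipping.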


2.2.1 Target Words

Experimental stimuli were 60 disyllabic German words of the form CVCCV(C) (see Appendix A for German words and pseudowords). In 30 of the words, the nucleus of the first syllable was an [ɛ]; the other 30 words had an [ɪ] as first nucleus. These first syllables of the carrier words were not allowed to introduce existing German words when the [ɛ] was exchanged with an [ɪ] and vice versa. For example, bech- is the beginning of the word Becher (‘mug’), while there is no German word starting with *bich-. Similarly, skiz- is the first syllable of Skizze (‘sketch’), but no German word has *skez- as its first syllable.

The exclusion criterion had to be restricted to first syllables with CVC structure; otherwise, an adequate number of items could not have been found. This restriction is justified by previous experimental findings suggesting that syllable structure matters in speech recognition. In a cross-modal fragment priming study, reaction times and the P350 ERP component were sensitive to syllable structure (Friedrich, Friederici & Alter, under revision). Auditory prime fragments that contained the same phonemes as their targets but differed from them in syllable structure (e.g. fahn- – Fah.ne, ‘flag’) caused longer reaction times and enhanced P350 amplitudes compared to primes that were identical to the first syllable of the target (e.g. fahn- – Fahn.der, ‘investigator’). Comparable behavioural results have been obtained in a cross-modal semantic priming study (Tabossi, Collina, Mazzetti & Zoppello, 2000) and in studies using the syllable detection paradigm (Mehler, Dommergues, Frauenfelder & Segui, 1981).

As a consequence, several of the experimental syllables that should not occur as beginnings of German words can in fact be encountered as such if syllabification is ignored. For example, the first syllable of Gipfel (‘peak’) is gip-, and no German word has gep- as its first syllable. The sequence gep- does, however, appear at the beginning of German words such as Gepard (‘cheetah’); there, the syllable structure is CV instead of CVC.
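The selection constraint described above can be sketched as a simple lexicon check. The mini-lexicon, function names, and the use of orthographic vowels in place of the IPA symbols are all hypothetical; the actual selection was carried out against a full German lexicon.

```python
# Hypothetical mini-lexicon of first CVC syllables of real German words.
# Note that gep- is absent: it occurs only as a CV syllable (Ge.pard),
# so it does not count under the CVC-structure exclusion criterion.
LEXICAL_CVC_ONSETS = {"bech", "skiz", "gip"}

def swap_vowel(fragment: str) -> str:
    """Exchange the orthographic nucleus 'i' <-> 'e' in a fragment."""
    if "i" in fragment:
        return fragment.replace("i", "e", 1)
    return fragment.replace("e", "i", 1)

def is_valid_carrier(first_syllable: str) -> bool:
    """A carrier qualifies only if its vowel-swapped counterpart does not
    begin any existing German word with matching CVC syllable structure."""
    return swap_vowel(first_syllable) not in LEXICAL_CVC_ONSETS
```

Under this check, gip- qualifies as a carrier syllable even though the letter sequence gep- begins the word Gepard, because only onsets with matching CVC structure are treated as competitors.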

Words with [] and words with [] should not differ from each other with respect to word length, frequency, abstractness, and size of the cohort activated by their first syllable.

Word length was defined as the number of segments. The []-words had a mean length of 6.14 segments (std = 1.14) and the []-words consisted on average of 6.55 segments (std =
