
4 DISCUSSION

4.3 Right Anterior Activation in the Time Course of the P350

In right-handed humans, language is assumed to be strongly lateralised to the left hemisphere. The role of the right hemisphere in speech recognition is not well investigated. Usually the two hemispheres are assumed to process different aspects of the language input (Phillips, Pellathy & Marantz, 2000; Federmeier & Kutas, 1999; Shapiro & Danly, 1985). In the crossmodal fragment priming studies conducted so far, the right anterior region did not contribute any effects to the data. In the present study it showed a curious pattern of results.

In the 200 to 300 ms time window, the right anterior region did not reveal any significant differences between the conditions. This suggests that in this early phase of language processing the right hemisphere is not active, or at least not to the same extent as the left hemisphere. Perhaps it assists in processing the language input but simply does not access the mental lexicon, and hence is deprived of the detailed information that would allow priming effects to occur.

In the 300 to 500 ms time window, the activation pattern over the right anterior region differed between [ɪ]-words and [ɛ]-words. The [ɛ]-words showed no significant effect at all, giving the impression that the right anterior brain region is not sensitive to crossmodal fragment priming. However, this does not apply to the [ɪ]-words. In the right anterior Region of Interest, the identical condition differed significantly from the related as well as the unrelated condition, whereas the related and unrelated conditions did not differ from each other. The amplitude of the identical condition was more negative than the amplitudes of the related and unrelated conditions. Thus, ERPs over the right anterior hemisphere differentiated a match in terms of segments from a mismatch, but only for [ɪ]-words. The FUL model does not assume that segments as wholes are stored in the mental lexicon, but rather that segments are represented in a more abstract fashion, in terms of features. This assumption is supported by the patterning of the results over the left anterior region. The activation pattern over the right anterior hemisphere, however, can be explained more straightforwardly by referring to whole segments. Yet a purely segmental account cannot explain the fact that the right anterior hemisphere was sensitive only to a segmental match of the [ɪ]-words but not of the [ɛ]-words.

There is an alternative explanation for these results. In an attempt to keep matters simple, only an overall matching pattern has been considered so far. That means a “nomismatch” was expected for the identical as well as the related conditions. As already mentioned in the introduction, several features are extracted simultaneously from the acoustic input stream and mapped onto lexical entries. For each feature, a match, a mismatch, or a nomismatch can result. So far, the term “nomismatch” was assigned to a condition as soon as a single nomismatch occurred. This is not wrong in principle: as soon as a nomismatch occurs, the mapping cannot ultimately result in a complete match, no matter how many other features of the segment match; at the same time, it will not end up as a mismatch, no matter how many nomismatches are encountered in a segment. That means a mapping process that produces any number of nomismatches is still assumed to result in the activation of the lexical item in question.

However, one could imagine that the quality of fit between the features in the signal and the entry in the mental lexicon does affect the level of activation. An acoustic input can activate a large cohort of items in the mental lexicon. Those items that match the input completely or to a large extent are probably activated more strongly than items that neither match in many features nor mismatch. A formula has been developed (Reetz, 1998; Lahiri & Reetz, 2002) that calculates the level of activation on the basis of the number of matching features relative to the number of features specified in the mental lexicon and the number of features extracted from the acoustic signal. It is not claimed here that the brain works according to this formula, but it would make sense to weight possible word candidates with respect to their level of agreement with the speech signal. The formula is as follows:

Score = (matching features)² / (features from signal × features in lexicon)
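
To make the calculation concrete, the following minimal sketch implements the score as a function of feature sets. The function name and the set-based encoding of features are illustrative assumptions, not part of the original proposal; matching features are taken to be those shared by signal and lexical entry.

    # Minimal sketch of the activation score of Reetz (1998) and
    # Lahiri & Reetz (2002); encoding features as Python sets is an
    # assumption made for illustration.
    def activation_score(signal_features: set, lexicon_features: set) -> float:
        matching = len(signal_features & lexicon_features)
        return matching ** 2 / (len(signal_features) * len(lexicon_features))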

In order to calculate this score for the vowels used in the experiment, all features that are extracted from the signal and those that are stored in the lexicon have to be listed. This is done in Table 4.4. Next, Table 4.5 gives the scores that result for the identical and related conditions.

Table 4.4: List of features that are assumed to be extracted from the signal and represented in the mental lexicon for the two crucial vowels [ɪ] and [ɛ]. cor = coronal; hi = high; voc = vocalic; rtr = retracted tongue root.

                    [ɪ]                  [ɛ]
    Signal          COR, HI, VOC, RTR    COR, VOC, RTR
    Representation  HI, VOC, RTR         VOC, RTR

Table 4.5: Scores that result for [ɪ]-words and [ɛ]-words as the formula is applied to the identical and related conditions.

    Example          Signal    Representation    Formula     Score
    skiz - Skizze    [ɪ]       /i/               9/(4×3)     0.75
    bech - Becher    [ɛ]       /e/               4/(3×2)     0.67
    bich - Becher    [ɪ]       /e/               4/(4×2)     0.50
    skez - Skizze    [ɛ]       /i/               4/(3×3)     0.44
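
Under the same assumptions as above, the scores of Table 4.5 can be reproduced directly from the feature sets of Table 4.4; the dictionary layout below is illustrative, not part of the original proposal.

    # Feature sets from Table 4.4; keys and layout are illustrative.
    SIGNAL = {"ɪ": {"COR", "HI", "VOC", "RTR"},  # features extracted from [ɪ]
              "ɛ": {"COR", "VOC", "RTR"}}        # features extracted from [ɛ]
    LEXICON = {"i": {"HI", "VOC", "RTR"},        # /i/ as stored for Skizze
               "e": {"VOC", "RTR"}}              # /e/ as stored for Becher

    for fragment, vowel, target in [("skiz", "ɪ", "i"), ("bech", "ɛ", "e"),
                                    ("bich", "ɪ", "e"), ("skez", "ɛ", "i")]:
        sig, lex = SIGNAL[vowel], LEXICON[target]
        score = len(sig & lex) ** 2 / (len(sig) * len(lex))
        print(f"{fragment}: {score:.2f}")  # prints 0.75, 0.67, 0.50, 0.44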

The highest score is obtained for the identical condition of the [ɪ]-words, followed by the identical condition of the [ɛ]-words. The lowest score results for the related condition of the [ɪ]-words, causing a large difference in activation level between the identical and related conditions. For the [ɛ]-words, the identical and related conditions do not differ as much. The identical condition of the [ɪ]-words has the highest activation score and the most negative mean amplitude in the 300-500 ms time window. Its amplitude is not only significantly larger than the amplitude of the related condition of the [ɪ]-words but also larger than that of the identical condition of the [ɛ]-words (F = 5.15, p = .04). One could speculate that the right anterior region is not sensitive to fine-grained differences in activation level, but kicks in once a certain threshold of activation is reached. It could do so in order to give additional support to promising items of the cohort. At least this way, the different ERP results for [ɪ]-words and [ɛ]-words can be explained: only the identical [ɪ]-words had sufficient matching features, relative to all features available in the signal and the representation, to activate additional mechanisms in the right anterior region. This is highly speculative and only one possibility of interpreting the data. However, it has to be questioned whether such a right-hemisphere mechanism exists; it is unclear what kind of information would be fed into such a mechanism and whether it would have access to the mental lexicon or not.

The effect that was observed here could also be an artefact of the ROI approach, which is based on the arbitrary assignment of electrode positions to Regions of Interest. It cannot be taken for granted that the pattern found in the right anterior Region of Interest can be attributed to activation in the right hemisphere.