
CHAPTER 4: METHODOLOGY

4.2. Data Annotation

4.2.2. The Process of Annotation and the Top-Down Approach

4.2.2.1. Step 1: The Determination of the Boundaries of Discourse Chunks

The majority of studies comprising larger collections of sign language data, corpus studies and sign language transcription have to deal with many practical issues, one of which is: how does one determine sentence boundaries in signed languages? (Crasborn 2007, p. 104). Hansen & Heßmann (2007) indicate that the forms marking sentence boundaries can have one or a combination of these functions: prosodic, semantic, textual, or pragmatic (p. 157).

One of the ways to determine the boundaries is to explore prosodic components, i.e. to divide the utterances into ‘intonational phrases’ (IPs) and successively smaller units such as ‘phonological phrases’ (for a detailed prosodic hierarchy see Sandler & Lillo-Martin 2006, adapted from Nespor & Vogel 1986).

However, Crasborn (2007) points out an important issue: the inconsistency of the nonmanual cues marking sentence boundaries. For instance, Hansen & Heßmann’s (2007) investigation into sentence boundaries in German Sign Language (DGS), using the TPAC (topic, predications, adjuncts and conjuncts) analysis, raises the issue of how inconsistently (non)manual units, namely palm-up, head nod, hold, blink and change of direction, mark final (and also internal) boundaries. Ormel & Crasborn (2012) conclude that signers cannot determine sentence boundaries with the aid of a single prosodic unit.

In the sense of Nespor & Sandler (1999) and Sandler & Lillo-Martin (2006), if we study IP boundaries, one can see that these boundaries can be marked by both manual and nonmanual units. The most frequent accompanying nonmanual units can be listed as follows: (i) eyebrow movements, (ii) blinks, (iii) head and body movements. On the other hand, prominence, palm-up and hold can mark such boundaries manually as well (see the detailed investigations of Ormel & Crasborn 2012 and Fenlon 2010).
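For annotation purposes, this cue inventory can be thought of as a set of binary features coded at every candidate boundary. The following Python sketch is purely illustrative: the class and feature names are mine, not part of any established annotation tool, and they merely restate the manual and nonmanual cues listed above.

```python
from dataclasses import dataclass

# Hypothetical coding scheme: one record per candidate IP boundary,
# with the nonmanual and manual cues above as binary features.
@dataclass
class BoundaryCues:
    eyebrow_change: bool = False   # nonmanual: brow raise/furrow on- or offset
    blink: bool = False            # nonmanual: boundary-sensitive blink
    head_movement: bool = False    # nonmanual: head nod, tilt, shake
    body_movement: bool = False    # nonmanual: torso lean/shift
    prominence: bool = False       # manual: e.g. reduplication or lengthening
    palm_up: bool = False          # manual: palm-up gesture
    hold: bool = False             # manual: final hold of the sign

    def cue_count(self) -> int:
        """Number of cues co-occurring at this point in the signing stream."""
        return sum(vars(self).values())

# Example: a candidate boundary marked by a blink plus a hold.
candidate = BoundaryCues(blink=True, hold=True)
print(candidate.cue_count())  # -> 2
```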

Eyebrow movements can roughly be distinguished as (i) eyebrow raise, (ii) neutral eyebrows and (iii) furrowed eyebrows (Wilbur 2000). Brow raise in ASL is known to mark syntactic constructions such as topics, left dislocations, conditionals, relative clauses, wh-clauses etc. However, brow raise is not used consistently in such constructions, i.e. it can be applied to various linguistic structures which may introduce either old or new information (Wilbur & Patschke 1999). Wilbur (2000) associates neutral eyebrows with assertions and furrowed eyebrows with wh-questions. Eyebrow movements seem to function as domain markers rather than boundary markers; the beginning and end of a brow raise can identify the location of IP boundaries (for British Sign Language, see Fenlon 2010).

Eye blinks seem to be strong boundary markers in ASL and ISL (Baker & Padden 1978; Wilbur 1994; Nespor & Sandler 1999). Nespor & Sandler (1999, p. 165) point out the striking similarity between breathing in spoken language and blinking in signed language, both of which function as boundary markers even though they are in fact part of an autonomous biological system. Wilbur (1994a) distinguishes two types of eye blinks: (i) involuntary/periodic eye blinks and (ii) voluntary blinks (revised from Stern & Dunham 1990). Reflexive eye blinks are not included in Wilbur’s work since they are not supposed to have any linguistic function, e.g. as boundary markers. Periodic blinking marks boundaries (i.e. syntactic phrases, prosodic phrases, discourse units and narrative units) while voluntary blinks are related to lexical signs (ibid., pp. 237-238).

However, Sze (2008) argues that periodic blinks and voluntary blinks may occur at the same time, as it is hard to define which blinks occur at the end of lexemes and which ones mark intonational boundaries. She proposes further categories: (i) Type 1: physiologically induced blinks, (ii) Type 2: boundary-sensitive blinks, (iii) Type 3: blinks co-occurring with head movements and/or gaze change but not related to syntactic boundaries, (iv) Type 4: voluntary/lexically related blinks/closures and (v) Type 5: blinks related to hesitation and self-correction (p. 95). Type 1, Type 3 and Type 5 blinks are irrelevant to marking boundaries; rather, it is Type 2 blinks that may mark grammatical boundaries in her HKSL data. For instance, 46.74% of eye blinks occurred at the end of a sentence/signing/conversation, and in total 59.1% of Type 2 eye blinks were used as boundary markers in Sze’s conversational data (p. 97). Herrmann (2009) provides similar results for DGS, where approximately 70% of blinks reflect prosodic breaks (i.e. at intonational phrases, phonological phrases and sentence-initial positions). According to Herrmann (2010, p. 33), there is consistency in the frequency of eye blinks among the signers who participated in her study, but the occurrence of eye blinks as a prosodic boundary marker in DGS is not obligatory but organized (Nespor & Sandler 1999). However, Herrmann again states that there is more than one marker indicating prosodic boundaries. Fenlon (2010) likewise reports that 56% of the blinks in his data mark an IP boundary; however, he too concludes that this result is not enough to establish blinks as a sole boundary marker. In sum, it is inadequate to claim that blinks alone mark those boundaries.
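Sze’s five-way typology lends itself to a simple coding scheme in which only one category is treated as boundary-relevant. A minimal sketch, with identifier names of my own choosing rather than Sze’s, might look as follows:

```python
from enum import Enum

# Sze's (2008) blink typology recast as annotation labels;
# the class and member names are illustrative, not Sze's own.
class BlinkType(Enum):
    PHYSIOLOGICAL = 1       # Type 1: physiologically induced
    BOUNDARY_SENSITIVE = 2  # Type 2: sensitive to grammatical boundaries
    MOVEMENT_RELATED = 3    # Type 3: with head movement/gaze change only
    LEXICAL = 4             # Type 4: voluntary/lexically related closures
    HESITATION = 5          # Type 5: hesitation and self-correction

# Only Type 2 blinks count as potential boundary markers in her data.
BOUNDARY_RELEVANT = {BlinkType.BOUNDARY_SENSITIVE}
```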

Head movements can be domain markers: for example, a headshake can mark negation in ASL (Wilbur 2009) and in DGS (Pfau & Quer 2010), and a head tilt can do so in TİD (Zeshan 2003). However, head movements can also give a clue to boundaries in ISL (Nespor & Sandler 1999; Sandler & Lillo-Martin 2006).

Wilbur (2009) lists possible functions of head nods in ASL: (i) a single head nod as a boundary marker, (ii) repetitive head nods as a focus marker and (iii) a head nod as assertion (p. 254). Fenlon (2010) shows that head movements are also observed at IP boundaries in BSL, i.e. single head movements (77%), repeated head movements (25%) and head nods (21%) (p. 102). Hansen & Heßmann (2007) note that not every sentence boundary is marked by a head nod in particular, but head nod occurrences in DGS sometimes accompany ‘palm-up’ gestures.

Furthermore, body leans can function as boundary markers (Nicodemus 2009). In addition, Fenlon (2010) found that torso movement can act as an IP boundary marker in BSL, but leans can also signify a ‘narrative function’ (i.e. a role shift). According to Fenlon, if torso leans with a narrative function are excluded, 36% of the torso movements in his data mark IP boundaries. Fenlon (2010) also notes a difference in the frequency of boundary markers occurring in different discourse modes (in his case, narratives).

On the other hand, Nespor & Sandler (1999) point to the prominence of signs in ISL located at the end of phonological phrases. The manual elements (i) reduplication, (ii) hold and (iii) pause generally mark prominence. Wilbur (1999) also shows that prominence occurs on signs at boundaries. Hansen & Heßmann (2007) show that pauses do not have a significant role in determining sentence boundaries in DGS; however, they found that a hold can be one of the boundary markers.

In another study, Fenlon, Denmark, Campbell & Woll (2007) asked six BSL signers and six non-signers to determine the boundaries in both BSL and Swedish Sign Language (SSL) narratives. Their study reveals that knowledge of a signed language does not play a major role in determining those boundaries.

Similar nonmanual markers occur at ‘strong’ IP boundaries38 in the analyses of both sign languages (ibid., p. 195). For instance, the most frequently observed cues are pauses, drop hands and holds (ibid., p. 190).

Related to Turkish Sign Language, Arık (in progress) claims that there is a correspondence between sentence types and the nonmanuals marking sentence boundaries in TİD. His data set includes 15 native TİD signers who were asked to narrate their life stories. In the data, he analyzed 96 declarative, 36 negative and 45 interrogative sentences. In Arık’s data, eye blinks are mostly observed in declarative sentences: 22 tokens out of 96 seem to be marked by eye blinks as a (final) sentence boundary marker. This percentage is relatively low for accepting eye blinks as sentence boundary markers. He also states that blinks cannot be a nonmanual marker at the NP level in TİD. In the same data, head nods are rarely used in negative sentences, but many more head nods are realized at final phrases than in sentence-initial position or after the first element; Arık states that head nods may function as a boundary marker. Head-shake movements in this data sometimes occur at the end of negative and interrogative sentences. Another nonmanual marker, head tilt, is prominent at the end of negative sentences, which can be related to the nonmanual expressions accompanying the negation particle sign DEĞİL. Hand down seems to be the strongest boundary marker among the nonmanuals: 44 tokens out of 96 declarative sentences, 27 out of 36 negative sentences and 21 out of 45 interrogative sentences end with hand down (Arık in progress, p. 16). He summarizes that strong candidates for sentence boundary markers are hand down and blinks; as for negative sentences, head tilt and hand down represent possible sentence boundary markers, whereas hand down and head-shake are most frequently observed in interrogatives.
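The percentages per sentence type implied by these token counts can be recovered with simple proportional arithmetic; a minimal sketch using the declarative counts cited above:

```python
# Converting raw token counts into percentages, as done for the
# ratios reported in this section (counts from Arık, in progress).
def to_percentage(tokens: int, total: int) -> float:
    return 100 * tokens / total

print(round(to_percentage(22, 96), 1))  # blinks in declaratives:    ~22.9%
print(round(to_percentage(44, 96), 1))  # hand down in declaratives: ~45.8%
```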

In conclusion, it is fairly difficult to define clear ‘sentence’ boundaries in signed languages, and many factors influence the inconsistencies among sign languages (Table 4.3). For instance, Fenlon et al. (2007, p. 190) show the similarities in ‘strong’ cues between BSL and SSL. However, the boundaries were identified by native BSL signers only; Fenlon et al. did not ask native SSL signers to examine the boundaries. If they had, would the results have been different? This is far from clear. Furthermore, Fenlon et al. indicate that discourse modes may have an influence, i.e. head rotation may be more frequently present in narratives.

38 Due to the non-isomorphism between syntactic and prosodic structures (Nespor & Vogel 1986), Fenlon and his colleagues focused only on prosodic structures and therefore investigated only intonational phrases (IPs).

On the other hand, methodology and the size of the data may have an influence as well. For example, Hansen & Heßmann (2007, p. 168) present the occurrence of prosodic cues at final phrases (the occurrences are converted into ratios in Table 4.3), but they analyze only 20 sentences in DGS. Another example comes from Arık, who shows that sentence type may influence which boundary markers occur in sentence-final position in TİD (the occurrences are shown as ratios in Table 4.3).

                             BSL     SSL     DGS     TİD
Head     Head Rotation       25%     12%
         Head Nod            45%     36%     5%      16%
         Head Movement       6%      40%             9%*
         Head Back           18%
Eye      Blinks              28%     16%     45%     12%
         Eye-gaze            9%      11%     70%
         Eyebrows            33%     29%
Manual   Hold                88%     44%     10%     6%
         Pauses              50%     57%
         Drop Hands          100%    100%            49%**
         Palm-up                             30%
         Long transition     30%

Table 4.3 - Comparison of ‘strong’ IP boundaries in BSL and SSL (Fenlon et al. 2007, p. 190) and final phrase boundaries in DGS (Hansen & Heßmann 2007, p. 168 – occurrences are converted into percentages) as well as in TİD (Arık in progress). *headshake **hand down

Since the “sentence” boundaries in sign languages remain relatively vague, as discussed above, further research is needed to understand how the “parts of speech” come together. It will also be necessary to include contributions from the areas of psycholinguistics (including statistical learning) and neurolinguistics. For example, a thorough analysis of how children who grow up signing acquire sign order, the structure of higher-order embedded sentences and the prosodic elements of, for example, relative clause constructions could also enable us to better understand IP/sentence boundaries. Unfortunately, there have not been such attempts regarding the acquisition, production or processing of TİD. Studies on statistical cues, such as the transitional probability (TP)39 between syllables and words (Gómez & Gerken 1999, Saffran & Wilson 2003, as cited in Lany & Saffran 2013, p. 237), revealed that hearing 12-month-old children are sensitive to the difference between grammatical and ungrammatical phrases, which might be evidence that they have learned the probabilistic co-occurrence relationships between words (i.e. implicit language learning). Psycholinguistic studies in sign language processing have also sought to understand how sign language is acquired, perceived and processed (for a good summary of processing in sign languages, see Dye 2012). Dye summarizes that there has been an intensive focus on the formal structures of sign language, mental representations and iconicity; however,

[...] understanding sign language requires much more than the comprehension of individual signs. The ways in which those signs are combined to form sentence-like or phrase-like blocks of meaning is also important, as is the way in which these blocks of meaning combine to provide an understanding at the level of a whole discourse. Studies of such higher-level sign processing are few (Morgan 2002, 2006) and represent a clear need for future study. (Dye 2012, p. 705)

As more research is conducted on sign language acquisition, processing and production, as well as implicit language learning in particular, we may gain new insights into how sign languages are structured and how the boundaries between phrases and clauses can be determined.

This dissertation deals with the abovementioned challenges by narrowing discourse units down into smaller units, i.e. possible intonational phrases (Nespor & Sandler 1999, Sandler & Lillo-Martin 2006), based on various nonmanual and manual cues. Besides the prosodic cues, these meaningful smaller units are also based on semantic intuitions. Whether they are realized as sentences or not is beyond the scope of this dissertation. Therefore, these smaller units are labeled discourse chunks here. The next step is to mark those chunks covering possible RCCs in order to investigate them more deeply.

39 The transitional probability (TP) of a co-occurrence relationship between two elements, X and Y, is computed by dividing the frequency of XY by the frequency of X. This yields the probability that if X occurs, Y will also occur (see Lany & Saffran 2013, p. 235).
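As a concrete illustration of the formula in footnote 39, the following minimal Python sketch computes a TP value; the counts used are invented for illustration only.

```python
# Transitional probability as defined in footnote 39:
# TP(Y | X) = frequency(XY) / frequency(X)
def transitional_probability(freq_xy: int, freq_x: int) -> float:
    return freq_xy / freq_x

# Hypothetical counts: X occurs 40 times and is followed by Y in 30 of them.
print(transitional_probability(30, 40))  # -> 0.75
```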


4.2.2.2. Step 2: Selecting the Chunks Which Include Potential Relative Clause Constructions